Test Report: QEMU_macOS 19643

                    
17d31f5d116bbb5d9ac8f4a1c2873ea47cdfa40f:2024-09-14:36211

Tests failed (99/274)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 14.09
7 TestDownloadOnly/v1.20.0/kubectl 0
21 TestBinaryMirror 0.25
22 TestOffline 9.94
33 TestAddons/parallel/Registry 71.33
46 TestCertOptions 10.15
47 TestCertExpiration 195.32
48 TestDockerFlags 10.27
49 TestForceSystemdFlag 10.33
50 TestForceSystemdEnv 10.47
95 TestFunctional/parallel/ServiceCmdConnect 29.05
167 TestMultiControlPlane/serial/StopSecondaryNode 214.12
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 103.01
169 TestMultiControlPlane/serial/RestartSecondaryNode 183.8
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 234.39
172 TestMultiControlPlane/serial/DeleteSecondaryNode 0.1
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.08
174 TestMultiControlPlane/serial/StopCluster 202.09
175 TestMultiControlPlane/serial/RestartCluster 5.26
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.08
177 TestMultiControlPlane/serial/AddSecondaryNode 0.07
181 TestImageBuild/serial/Setup 10.17
184 TestJSONOutput/start/Command 9.99
190 TestJSONOutput/pause/Command 0.08
196 TestJSONOutput/unpause/Command 0.04
213 TestMinikubeProfile 10.1
216 TestMountStart/serial/StartWithMountFirst 10.13
219 TestMultiNode/serial/FreshStart2Nodes 10.03
220 TestMultiNode/serial/DeployApp2Nodes 99.07
221 TestMultiNode/serial/PingHostFrom2Pods 0.09
222 TestMultiNode/serial/AddNode 0.07
223 TestMultiNode/serial/MultiNodeLabels 0.06
224 TestMultiNode/serial/ProfileList 0.08
225 TestMultiNode/serial/CopyFile 0.06
226 TestMultiNode/serial/StopNode 0.14
227 TestMultiNode/serial/StartAfterStop 38.62
228 TestMultiNode/serial/RestartKeepsNodes 7.35
229 TestMultiNode/serial/DeleteNode 0.1
230 TestMultiNode/serial/StopMultiNode 3.53
231 TestMultiNode/serial/RestartMultiNode 5.26
232 TestMultiNode/serial/ValidateNameConflict 20.1
236 TestPreload 10
238 TestScheduledStopUnix 10.11
239 TestSkaffold 12.14
242 TestRunningBinaryUpgrade 601.13
244 TestKubernetesUpgrade 18.78
257 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.4
258 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.25
260 TestStoppedBinaryUpgrade/Upgrade 575.66
262 TestPause/serial/Start 9.93
272 TestNoKubernetes/serial/StartWithK8s 9.88
273 TestNoKubernetes/serial/StartWithStopK8s 5.27
274 TestNoKubernetes/serial/Start 5.29
278 TestNoKubernetes/serial/StartNoArgs 5.3
280 TestNetworkPlugins/group/auto/Start 9.79
281 TestNetworkPlugins/group/calico/Start 9.95
282 TestNetworkPlugins/group/custom-flannel/Start 9.82
283 TestNetworkPlugins/group/false/Start 9.85
284 TestNetworkPlugins/group/kindnet/Start 9.81
285 TestNetworkPlugins/group/flannel/Start 9.85
286 TestNetworkPlugins/group/enable-default-cni/Start 9.91
287 TestNetworkPlugins/group/bridge/Start 9.82
288 TestNetworkPlugins/group/kubenet/Start 9.91
291 TestStartStop/group/old-k8s-version/serial/FirstStart 10.32
292 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
293 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.11
296 TestStartStop/group/old-k8s-version/serial/SecondStart 5.25
297 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
298 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
299 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
300 TestStartStop/group/old-k8s-version/serial/Pause 0.1
302 TestStartStop/group/no-preload/serial/FirstStart 9.93
304 TestStartStop/group/embed-certs/serial/FirstStart 10.56
305 TestStartStop/group/no-preload/serial/DeployApp 0.1
306 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.15
309 TestStartStop/group/no-preload/serial/SecondStart 5.47
310 TestStartStop/group/embed-certs/serial/DeployApp 0.1
311 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
312 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
313 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.12
314 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.08
315 TestStartStop/group/no-preload/serial/Pause 0.11
318 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 10.16
320 TestStartStop/group/embed-certs/serial/SecondStart 7.23
321 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.1
322 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.04
323 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
324 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.09
325 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.12
326 TestStartStop/group/embed-certs/serial/Pause 0.11
329 TestStartStop/group/newest-cni/serial/FirstStart 9.93
331 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.73
332 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.04
333 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
334 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.09
335 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
340 TestStartStop/group/newest-cni/serial/SecondStart 5.26
343 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
344 TestStartStop/group/newest-cni/serial/Pause 0.1
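
To rerun one of these tests locally, the usual approach is Go's -run filter against the integration test package. A minimal sketch, assuming a minikube source checkout; the test/integration path and the Makefile target are assumptions, not taken from this report:

	# Build the binary the tests invoke (name matches the logs below).
	make out/minikube-darwin-arm64
	# Rerun a single failing test by name; adjust -timeout as needed.
	go test ./test/integration -run 'TestOffline$' -v -timeout 30m
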
TestDownloadOnly/v1.20.0/json-events (14.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-612000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-612000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (14.090893292s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"02ac9cb4-ba04-4bc4-b3d9-0e8e76b6592b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-612000] minikube v1.34.0 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a531a24c-6d83-4a7f-8ad0-a91b6ecfdae4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19643"}}
	{"specversion":"1.0","id":"0b5ec86c-506b-4049-8729-5b23fa2f7a45","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig"}}
	{"specversion":"1.0","id":"e06ff6e6-5b15-4448-aa9f-e358ac8b2359","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"fcc83068-168f-47af-b2bf-4100df4561ea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ddaa8937-8581-469c-8efa-13b5b99aa460","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube"}}
	{"specversion":"1.0","id":"b232eea4-8eaa-413a-aad1-8def9d23c183","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"0032ce7d-9398-4ffe-8fd7-e4c894f2e2a9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"a2bd53b2-de45-4698-9469-491acc1ed6b1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"4524c6b6-e122-4c3f-91d1-cc15e43db750","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"db76b1a7-9af2-44fe-8372-dc830a87443e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-612000\" primary control-plane node in \"download-only-612000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"8a991d82-4848-4a67-b045-9a68219628b2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"d7f46f18-2e7c-4725-b770-1d162e957ae8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19643-1079/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1090dd780 0x1090dd780 0x1090dd780 0x1090dd780 0x1090dd780 0x1090dd780 0x1090dd780] Decompressors:map[bz2:0x1400000eb80 gz:0x1400000eb88 tar:0x1400000eb30 tar.bz2:0x1400000eb40 tar.gz:0x1400000eb50 tar.xz:0x1400000eb60 tar.zst:0x1400000eb70 tbz2:0x1400000eb40 tgz:0x14
00000eb50 txz:0x1400000eb60 tzst:0x1400000eb70 xz:0x1400000eb90 zip:0x1400000eba0 zst:0x1400000eb98] Getters:map[file:0x14000110770 http:0x1400013c0a0 https:0x1400013c0f0] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"edd3cb78-77d5-49bd-bf68-b2082ba657d1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 09:42:53.948923    1605 out.go:345] Setting OutFile to fd 1 ...
	I0914 09:42:53.949092    1605 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 09:42:53.949095    1605 out.go:358] Setting ErrFile to fd 2...
	I0914 09:42:53.949097    1605 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 09:42:53.949226    1605 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19643-1079/.minikube/bin
	W0914 09:42:53.949315    1605 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19643-1079/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19643-1079/.minikube/config/config.json: no such file or directory
	I0914 09:42:53.950552    1605 out.go:352] Setting JSON to true
	I0914 09:42:53.968000    1605 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":736,"bootTime":1726331437,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0914 09:42:53.968076    1605 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0914 09:42:53.974437    1605 out.go:97] [download-only-612000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0914 09:42:53.974593    1605 notify.go:220] Checking for updates...
	W0914 09:42:53.974679    1605 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/preloaded-tarball: no such file or directory
	I0914 09:42:53.978468    1605 out.go:169] MINIKUBE_LOCATION=19643
	I0914 09:42:53.985577    1605 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	I0914 09:42:53.989337    1605 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 09:42:53.992439    1605 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 09:42:53.995500    1605 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	W0914 09:42:54.001360    1605 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0914 09:42:54.001565    1605 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 09:42:54.006459    1605 out.go:97] Using the qemu2 driver based on user configuration
	I0914 09:42:54.006478    1605 start.go:297] selected driver: qemu2
	I0914 09:42:54.006492    1605 start.go:901] validating driver "qemu2" against <nil>
	I0914 09:42:54.006564    1605 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0914 09:42:54.009471    1605 out.go:169] Automatically selected the socket_vmnet network
	I0914 09:42:54.015156    1605 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0914 09:42:54.015246    1605 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0914 09:42:54.015293    1605 cni.go:84] Creating CNI manager for ""
	I0914 09:42:54.015325    1605 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0914 09:42:54.015367    1605 start.go:340] cluster config:
	{Name:download-only-612000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-612000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 09:42:54.020987    1605 iso.go:125] acquiring lock: {Name:mk9ce8cb266895455f3a8fa26b67755853c5c1f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 09:42:54.028954    1605 out.go:97] Downloading VM boot image ...
	I0914 09:42:54.028969    1605 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/iso/arm64/minikube-v1.34.0-1726281733-19643-arm64.iso
	I0914 09:43:01.545499    1605 out.go:97] Starting "download-only-612000" primary control-plane node in "download-only-612000" cluster
	I0914 09:43:01.545525    1605 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0914 09:43:01.601812    1605 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0914 09:43:01.601830    1605 cache.go:56] Caching tarball of preloaded images
	I0914 09:43:01.601973    1605 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0914 09:43:01.607105    1605 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0914 09:43:01.607112    1605 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0914 09:43:01.684228    1605 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0914 09:43:06.741479    1605 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0914 09:43:06.741646    1605 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0914 09:43:07.438440    1605 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0914 09:43:07.438654    1605 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/download-only-612000/config.json ...
	I0914 09:43:07.438670    1605 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/download-only-612000/config.json: {Name:mk48aad586bee83c51d5ade0281ee793bb948236 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 09:43:07.438892    1605 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0914 09:43:07.439087    1605 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0914 09:43:07.959728    1605 out.go:193] 
	W0914 09:43:07.965780    1605 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19643-1079/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1090dd780 0x1090dd780 0x1090dd780 0x1090dd780 0x1090dd780 0x1090dd780 0x1090dd780] Decompressors:map[bz2:0x1400000eb80 gz:0x1400000eb88 tar:0x1400000eb30 tar.bz2:0x1400000eb40 tar.gz:0x1400000eb50 tar.xz:0x1400000eb60 tar.zst:0x1400000eb70 tbz2:0x1400000eb40 tgz:0x1400000eb50 txz:0x1400000eb60 tzst:0x1400000eb70 xz:0x1400000eb90 zip:0x1400000eba0 zst:0x1400000eb98] Getters:map[file:0x14000110770 http:0x1400013c0a0 https:0x1400013c0f0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0914 09:43:07.965802    1605 out_reason.go:110] 
	W0914 09:43:07.976748    1605 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 09:43:07.980737    1605 out.go:193] 

                                                
                                                
** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-612000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (14.09s)
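
The root cause is visible in the getter error above: the checksum fetch for https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 returns HTTP 404, so the kubectl cache step aborts with exit status 40. A minimal sketch of reproducing the 404 outside minikube (the curl invocation is illustrative, not from the report):

	# Request only the headers of the checksum file minikube tried to fetch.
	curl -sSI https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256
	# Expect a 404: dl.k8s.io publishes no darwin/arm64 kubectl checksum
	# for v1.20.0, matching the "bad response code: 404" in the log above.
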

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19643-1079/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestBinaryMirror (0.25s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-023000 --alsologtostderr --binary-mirror http://127.0.0.1:49313 --driver=qemu2 
aaa_download_only_test.go:314: (dbg) Non-zero exit: out/minikube-darwin-arm64 start --download-only -p binary-mirror-023000 --alsologtostderr --binary-mirror http://127.0.0.1:49313 --driver=qemu2 : exit status 40 (154.550667ms)

                                                
                                                
-- stdout --
	* [binary-mirror-023000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19643
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "binary-mirror-023000" primary control-plane node in "binary-mirror-023000" cluster
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 09:43:15.022936    1665 out.go:345] Setting OutFile to fd 1 ...
	I0914 09:43:15.023056    1665 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 09:43:15.023059    1665 out.go:358] Setting ErrFile to fd 2...
	I0914 09:43:15.023062    1665 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 09:43:15.023203    1665 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19643-1079/.minikube/bin
	I0914 09:43:15.024213    1665 out.go:352] Setting JSON to false
	I0914 09:43:15.040527    1665 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":758,"bootTime":1726331437,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0914 09:43:15.040586    1665 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0914 09:43:15.045137    1665 out.go:177] * [binary-mirror-023000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0914 09:43:15.052121    1665 out.go:177]   - MINIKUBE_LOCATION=19643
	I0914 09:43:15.052177    1665 notify.go:220] Checking for updates...
	I0914 09:43:15.059109    1665 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	I0914 09:43:15.062034    1665 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 09:43:15.065084    1665 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 09:43:15.068083    1665 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	I0914 09:43:15.071294    1665 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 09:43:15.076027    1665 out.go:177] * Using the qemu2 driver based on user configuration
	I0914 09:43:15.083080    1665 start.go:297] selected driver: qemu2
	I0914 09:43:15.083086    1665 start.go:901] validating driver "qemu2" against <nil>
	I0914 09:43:15.083158    1665 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0914 09:43:15.086026    1665 out.go:177] * Automatically selected the socket_vmnet network
	I0914 09:43:15.091333    1665 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0914 09:43:15.091428    1665 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0914 09:43:15.091449    1665 cni.go:84] Creating CNI manager for ""
	I0914 09:43:15.091473    1665 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 09:43:15.091481    1665 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0914 09:43:15.091525    1665 start.go:340] cluster config:
	{Name:binary-mirror-023000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:binary-mirror-023000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:http://127.0.0.1:49313 DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 09:43:15.095167    1665 iso.go:125] acquiring lock: {Name:mk9ce8cb266895455f3a8fa26b67755853c5c1f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 09:43:15.104053    1665 out.go:177] * Starting "binary-mirror-023000" primary control-plane node in "binary-mirror-023000" cluster
	I0914 09:43:15.108167    1665 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0914 09:43:15.108185    1665 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0914 09:43:15.108203    1665 cache.go:56] Caching tarball of preloaded images
	I0914 09:43:15.108289    1665 preload.go:172] Found /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 09:43:15.108295    1665 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0914 09:43:15.108497    1665 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/binary-mirror-023000/config.json ...
	I0914 09:43:15.108508    1665 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/binary-mirror-023000/config.json: {Name:mka5709163dc784c17d8c5e0b9498bb7844acf51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 09:43:15.108870    1665 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0914 09:43:15.108920    1665 download.go:107] Downloading: http://127.0.0.1:49313/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:49313/v1.31.1/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/darwin/arm64/v1.31.1/kubectl
	I0914 09:43:15.125225    1665 out.go:201] 
	W0914 09:43:15.129108    1665 out.go:270] X Exiting due to INET_CACHE_KUBECTL: Failed to cache kubectl: download failed: http://127.0.0.1:49313/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:49313/v1.31.1/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:http://127.0.0.1:49313/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:49313/v1.31.1/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19643-1079/.minikube/cache/darwin/arm64/v1.31.1/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1073c1780 0x1073c1780 0x1073c1780 0x1073c1780 0x1073c1780 0x1073c1780 0x1073c1780] Decompressors:map[bz2:0x14000793aa0 gz:0x14000793aa8 tar:0x14000793a50 tar.bz2:0x14000793a60 tar.gz:0x14000793a70 tar.xz:0x14000793a80 tar.zst:0x14000793a90 tbz2:0x14000793a60 tgz:0x14000793a70 txz:0x14000793a80 tzst:0x14000793a90 xz:0x14000793ab0 zip:0x14000793ac0 zst:0x14000793ab8] Getters:map[file:0x140006d3740 http:0x1400063f180 https:0x1400063f1d0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: unexpected EOF
	X Exiting due to INET_CACHE_KUBECTL: Failed to cache kubectl: download failed: http://127.0.0.1:49313/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:49313/v1.31.1/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:http://127.0.0.1:49313/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:49313/v1.31.1/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19643-1079/.minikube/cache/darwin/arm64/v1.31.1/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1073c1780 0x1073c1780 0x1073c1780 0x1073c1780 0x1073c1780 0x1073c1780 0x1073c1780] Decompressors:map[bz2:0x14000793aa0 gz:0x14000793aa8 tar:0x14000793a50 tar.bz2:0x14000793a60 tar.gz:0x14000793a70 tar.xz:0x14000793a80 tar.zst:0x14000793a90 tbz2:0x14000793a60 tgz:0x14000793a70 txz:0x14000793a80 tzst:0x14000793a90 xz:0x14000793ab0 zip:0x14000793ac0 zst:0x14000793ab8] Getters:map[file:0x140006d3740 http:0x1400063f180 https:0x1400063f1d0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: unexpected EOF
	W0914 09:43:15.129115    1665 out.go:270] * 
	* 
	W0914 09:43:15.129557    1665 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 09:43:15.144083    1665 out.go:201] 

                                                
                                                
** /stderr **
aaa_download_only_test.go:315: start with --binary-mirror failed ["start" "--download-only" "-p" "binary-mirror-023000" "--alsologtostderr" "--binary-mirror" "http://127.0.0.1:49313" "--driver=qemu2" ""] : exit status 40
helpers_test.go:175: Cleaning up "binary-mirror-023000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-023000
--- FAIL: TestBinaryMirror (0.25s)
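
The log shows the layout minikube expects from a --binary-mirror endpoint: <mirror>/<version>/bin/<os>/<arch>/kubectl plus a matching .sha256 checksum file. A minimal sketch of the two requests the getter makes (the mirror at 127.0.0.1:49313 exists only while the test runs, so this is illustrative):

	# Checksum file first, then the binary itself.
	curl -sS http://127.0.0.1:49313/v1.31.1/bin/darwin/arm64/kubectl.sha256
	curl -sSO http://127.0.0.1:49313/v1.31.1/bin/darwin/arm64/kubectl

Here the getter reports "unexpected EOF", which suggests the mirror closed the connection mid-transfer rather than answering with an HTTP error.
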

                                                
                                    
TestOffline (9.94s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-216000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-216000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.783655167s)

                                                
                                                
-- stdout --
	* [offline-docker-216000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19643
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-216000" primary control-plane node in "offline-docker-216000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-216000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 10:28:07.212669    4303 out.go:345] Setting OutFile to fd 1 ...
	I0914 10:28:07.212834    4303 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:28:07.212837    4303 out.go:358] Setting ErrFile to fd 2...
	I0914 10:28:07.212840    4303 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:28:07.212960    4303 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19643-1079/.minikube/bin
	I0914 10:28:07.214060    4303 out.go:352] Setting JSON to false
	I0914 10:28:07.231477    4303 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3450,"bootTime":1726331437,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0914 10:28:07.231545    4303 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0914 10:28:07.237319    4303 out.go:177] * [offline-docker-216000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0914 10:28:07.245196    4303 out.go:177]   - MINIKUBE_LOCATION=19643
	I0914 10:28:07.245195    4303 notify.go:220] Checking for updates...
	I0914 10:28:07.252121    4303 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	I0914 10:28:07.255116    4303 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 10:28:07.258115    4303 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 10:28:07.261138    4303 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	I0914 10:28:07.264109    4303 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 10:28:07.267521    4303 config.go:182] Loaded profile config "multinode-699000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 10:28:07.267586    4303 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 10:28:07.272103    4303 out.go:177] * Using the qemu2 driver based on user configuration
	I0914 10:28:07.279131    4303 start.go:297] selected driver: qemu2
	I0914 10:28:07.279141    4303 start.go:901] validating driver "qemu2" against <nil>
	I0914 10:28:07.279151    4303 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 10:28:07.281068    4303 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0914 10:28:07.284121    4303 out.go:177] * Automatically selected the socket_vmnet network
	I0914 10:28:07.287178    4303 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 10:28:07.287196    4303 cni.go:84] Creating CNI manager for ""
	I0914 10:28:07.287217    4303 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 10:28:07.287221    4303 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0914 10:28:07.287254    4303 start.go:340] cluster config:
	{Name:offline-docker-216000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-216000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 10:28:07.290969    4303 iso.go:125] acquiring lock: {Name:mk9ce8cb266895455f3a8fa26b67755853c5c1f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 10:28:07.298098    4303 out.go:177] * Starting "offline-docker-216000" primary control-plane node in "offline-docker-216000" cluster
	I0914 10:28:07.302072    4303 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0914 10:28:07.302103    4303 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0914 10:28:07.302113    4303 cache.go:56] Caching tarball of preloaded images
	I0914 10:28:07.302188    4303 preload.go:172] Found /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 10:28:07.302193    4303 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0914 10:28:07.302262    4303 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/offline-docker-216000/config.json ...
	I0914 10:28:07.302273    4303 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/offline-docker-216000/config.json: {Name:mk930e717c07e523deb2ad139e584b30c4cbf0e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 10:28:07.302564    4303 start.go:360] acquireMachinesLock for offline-docker-216000: {Name:mk3755b4300f547653a71a0686083053e04fcd63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 10:28:07.302596    4303 start.go:364] duration metric: took 25.25µs to acquireMachinesLock for "offline-docker-216000"
	I0914 10:28:07.302609    4303 start.go:93] Provisioning new machine with config: &{Name:offline-docker-216000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-216000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 10:28:07.302643    4303 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 10:28:07.306052    4303 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0914 10:28:07.321979    4303 start.go:159] libmachine.API.Create for "offline-docker-216000" (driver="qemu2")
	I0914 10:28:07.322014    4303 client.go:168] LocalClient.Create starting
	I0914 10:28:07.322077    4303 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/ca.pem
	I0914 10:28:07.322109    4303 main.go:141] libmachine: Decoding PEM data...
	I0914 10:28:07.322118    4303 main.go:141] libmachine: Parsing certificate...
	I0914 10:28:07.322162    4303 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/cert.pem
	I0914 10:28:07.322185    4303 main.go:141] libmachine: Decoding PEM data...
	I0914 10:28:07.322195    4303 main.go:141] libmachine: Parsing certificate...
	I0914 10:28:07.322634    4303 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19643-1079/.minikube/cache/iso/arm64/minikube-v1.34.0-1726281733-19643-arm64.iso...
	I0914 10:28:07.485227    4303 main.go:141] libmachine: Creating SSH key...
	I0914 10:28:07.579699    4303 main.go:141] libmachine: Creating Disk image...
	I0914 10:28:07.579708    4303 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 10:28:07.580253    4303 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/offline-docker-216000/disk.qcow2.raw /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/offline-docker-216000/disk.qcow2
	I0914 10:28:07.589934    4303 main.go:141] libmachine: STDOUT: 
	I0914 10:28:07.589954    4303 main.go:141] libmachine: STDERR: 
	I0914 10:28:07.590011    4303 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/offline-docker-216000/disk.qcow2 +20000M
	I0914 10:28:07.599196    4303 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 10:28:07.599225    4303 main.go:141] libmachine: STDERR: 
	I0914 10:28:07.599244    4303 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/offline-docker-216000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/offline-docker-216000/disk.qcow2
	I0914 10:28:07.599249    4303 main.go:141] libmachine: Starting QEMU VM...
	I0914 10:28:07.599260    4303 qemu.go:418] Using hvf for hardware acceleration
	I0914 10:28:07.599294    4303 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/offline-docker-216000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/offline-docker-216000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/offline-docker-216000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:f4:52:f9:31:8c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/offline-docker-216000/disk.qcow2
	I0914 10:28:07.600967    4303 main.go:141] libmachine: STDOUT: 
	I0914 10:28:07.600982    4303 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 10:28:07.601005    4303 client.go:171] duration metric: took 278.996583ms to LocalClient.Create
	I0914 10:28:09.603025    4303 start.go:128] duration metric: took 2.300468291s to createHost
	I0914 10:28:09.603058    4303 start.go:83] releasing machines lock for "offline-docker-216000", held for 2.300556s
	W0914 10:28:09.603072    4303 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 10:28:09.607988    4303 out.go:177] * Deleting "offline-docker-216000" in qemu2 ...
	W0914 10:28:09.621333    4303 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 10:28:09.621346    4303 start.go:729] Will try again in 5 seconds ...
	I0914 10:28:14.623354    4303 start.go:360] acquireMachinesLock for offline-docker-216000: {Name:mk3755b4300f547653a71a0686083053e04fcd63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 10:28:14.623752    4303 start.go:364] duration metric: took 316.917µs to acquireMachinesLock for "offline-docker-216000"
	I0914 10:28:14.623852    4303 start.go:93] Provisioning new machine with config: &{Name:offline-docker-216000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-216000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 10:28:14.624100    4303 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 10:28:14.631855    4303 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0914 10:28:14.678803    4303 start.go:159] libmachine.API.Create for "offline-docker-216000" (driver="qemu2")
	I0914 10:28:14.678852    4303 client.go:168] LocalClient.Create starting
	I0914 10:28:14.678978    4303 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/ca.pem
	I0914 10:28:14.679041    4303 main.go:141] libmachine: Decoding PEM data...
	I0914 10:28:14.679057    4303 main.go:141] libmachine: Parsing certificate...
	I0914 10:28:14.679126    4303 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/cert.pem
	I0914 10:28:14.679165    4303 main.go:141] libmachine: Decoding PEM data...
	I0914 10:28:14.679180    4303 main.go:141] libmachine: Parsing certificate...
	I0914 10:28:14.679646    4303 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19643-1079/.minikube/cache/iso/arm64/minikube-v1.34.0-1726281733-19643-arm64.iso...
	I0914 10:28:14.863915    4303 main.go:141] libmachine: Creating SSH key...
	I0914 10:28:14.892597    4303 main.go:141] libmachine: Creating Disk image...
	I0914 10:28:14.892605    4303 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 10:28:14.892790    4303 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/offline-docker-216000/disk.qcow2.raw /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/offline-docker-216000/disk.qcow2
	I0914 10:28:14.901767    4303 main.go:141] libmachine: STDOUT: 
	I0914 10:28:14.901786    4303 main.go:141] libmachine: STDERR: 
	I0914 10:28:14.901836    4303 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/offline-docker-216000/disk.qcow2 +20000M
	I0914 10:28:14.910197    4303 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 10:28:14.910213    4303 main.go:141] libmachine: STDERR: 
	I0914 10:28:14.910225    4303 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/offline-docker-216000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/offline-docker-216000/disk.qcow2
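The two qemu-img calls above are how the qemu2 driver materializes the VM disk: convert the raw seed file to qcow2, then grow the image by the requested 20000 MB. A standalone sketch of the same sequence, assuming only that qemu-img is on PATH (the paths and the createDisk helper are placeholders, not part of minikube):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// createDisk mirrors the two logged qemu-img steps: raw -> qcow2
	// conversion, then an in-place resize by +20000M.
	func createDisk(raw, qcow2 string) error {
		if out, err := exec.Command("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow2).CombinedOutput(); err != nil {
			return fmt.Errorf("convert: %v: %s", err, out)
		}
		if out, err := exec.Command("qemu-img", "resize", qcow2, "+20000M").CombinedOutput(); err != nil {
			return fmt.Errorf("resize: %v: %s", err, out)
		}
		return nil
	}

	func main() {
		if err := createDisk("disk.qcow2.raw", "disk.qcow2"); err != nil {
			fmt.Println(err)
		}
	}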
	I0914 10:28:14.910230    4303 main.go:141] libmachine: Starting QEMU VM...
	I0914 10:28:14.910237    4303 qemu.go:418] Using hvf for hardware acceleration
	I0914 10:28:14.910280    4303 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/offline-docker-216000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/offline-docker-216000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/offline-docker-216000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:ed:ff:0e:90:aa -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/offline-docker-216000/disk.qcow2
	I0914 10:28:14.911945    4303 main.go:141] libmachine: STDOUT: 
	I0914 10:28:14.911957    4303 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 10:28:14.911969    4303 client.go:171] duration metric: took 233.122125ms to LocalClient.Create
	I0914 10:28:16.914063    4303 start.go:128] duration metric: took 2.290029084s to createHost
	I0914 10:28:16.914126    4303 start.go:83] releasing machines lock for "offline-docker-216000", held for 2.290447917s
	W0914 10:28:16.914439    4303 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-216000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-216000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 10:28:16.929932    4303 out.go:201] 
	W0914 10:28:16.933951    4303 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 10:28:16.933982    4303 out.go:270] * 
	* 
	W0914 10:28:16.936821    4303 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 10:28:16.949877    4303 out.go:201] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-216000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
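The root cause above is environmental rather than a bug in the code under test: nothing is listening on /var/run/socket_vmnet, so socket_vmnet_client cannot hand QEMU a networking file descriptor. As a minimal sketch (a hypothetical helper, not part of the test suite), the failing precondition can be reproduced by dialing the unix socket directly with Go's standard library:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Dial the control socket that socket_vmnet_client needs; on this
		// CI host the daemon is down, so this returns "connection refused".
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is listening")
	}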
panic.go:629: *** TestOffline FAILED at 2024-09-14 10:28:16.966186 -0700 PDT m=+2723.261149417
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-216000 -n offline-docker-216000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-216000 -n offline-docker-216000: exit status 7 (69.127666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-216000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-216000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-216000
--- FAIL: TestOffline (9.94s)

TestAddons/parallel/Registry (71.33s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 1.2285ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-plmzv" [4a35ee5a-620a-469e-a679-2174aad28170] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.008457708s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-nkvcw" [321fe437-23fd-4025-9847-6fd69811ce7a] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.007281041s
addons_test.go:342: (dbg) Run:  kubectl --context addons-528000 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-528000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context addons-528000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.074075916s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-528000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:361: (dbg) Run:  out/minikube-darwin-arm64 -p addons-528000 ip
2024/09/14 09:56:29 [DEBUG] GET http://192.168.105.2:5000
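The DEBUG line shows the fallback check the test performs after the in-cluster wget times out: fetching the registry addon through the node IP reported by minikube ip. A hedged sketch of the same probe (IP and port are taken from this run's log; the /v2/ path is the standard Docker registry API root, added here for illustration):

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		c := &http.Client{Timeout: 5 * time.Second}
		// Node IP and registry port from this run's log.
		resp, err := c.Get("http://192.168.105.2:5000/v2/")
		if err != nil {
			fmt.Println("registry unreachable:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("registry responded:", resp.Status)
	}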
addons_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 -p addons-528000 addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p addons-528000 -n addons-528000
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p addons-528000 logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-612000 | jenkins | v1.34.0 | 14 Sep 24 09:42 PDT |                     |
	|         | -p download-only-612000                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                                              |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 14 Sep 24 09:43 PDT | 14 Sep 24 09:43 PDT |
	| delete  | -p download-only-612000                                                                     | download-only-612000 | jenkins | v1.34.0 | 14 Sep 24 09:43 PDT | 14 Sep 24 09:43 PDT |
	| start   | -o=json --download-only                                                                     | download-only-039000 | jenkins | v1.34.0 | 14 Sep 24 09:43 PDT |                     |
	|         | -p download-only-039000                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                                              |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 14 Sep 24 09:43 PDT | 14 Sep 24 09:43 PDT |
	| delete  | -p download-only-039000                                                                     | download-only-039000 | jenkins | v1.34.0 | 14 Sep 24 09:43 PDT | 14 Sep 24 09:43 PDT |
	| delete  | -p download-only-612000                                                                     | download-only-612000 | jenkins | v1.34.0 | 14 Sep 24 09:43 PDT | 14 Sep 24 09:43 PDT |
	| delete  | -p download-only-039000                                                                     | download-only-039000 | jenkins | v1.34.0 | 14 Sep 24 09:43 PDT | 14 Sep 24 09:43 PDT |
	| start   | --download-only -p                                                                          | binary-mirror-023000 | jenkins | v1.34.0 | 14 Sep 24 09:43 PDT |                     |
	|         | binary-mirror-023000                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:49313                                                                      |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                                              |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-023000                                                                     | binary-mirror-023000 | jenkins | v1.34.0 | 14 Sep 24 09:43 PDT | 14 Sep 24 09:43 PDT |
	| addons  | disable dashboard -p                                                                        | addons-528000        | jenkins | v1.34.0 | 14 Sep 24 09:43 PDT |                     |
	|         | addons-528000                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-528000        | jenkins | v1.34.0 | 14 Sep 24 09:43 PDT |                     |
	|         | addons-528000                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-528000 --wait=true                                                                | addons-528000        | jenkins | v1.34.0 | 14 Sep 24 09:43 PDT | 14 Sep 24 09:46 PDT |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=qemu2  --addons=ingress                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	| addons  | addons-528000 addons disable                                                                | addons-528000        | jenkins | v1.34.0 | 14 Sep 24 09:47 PDT | 14 Sep 24 09:47 PDT |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-528000 addons                                                                        | addons-528000        | jenkins | v1.34.0 | 14 Sep 24 09:55 PDT | 14 Sep 24 09:55 PDT |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-528000 addons                                                                        | addons-528000        | jenkins | v1.34.0 | 14 Sep 24 09:55 PDT | 14 Sep 24 09:55 PDT |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-528000 addons disable                                                                | addons-528000        | jenkins | v1.34.0 | 14 Sep 24 09:55 PDT | 14 Sep 24 09:56 PDT |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-528000        | jenkins | v1.34.0 | 14 Sep 24 09:56 PDT | 14 Sep 24 09:56 PDT |
	|         | -p addons-528000                                                                            |                      |         |         |                     |                     |
	| ssh     | addons-528000 ssh cat                                                                       | addons-528000        | jenkins | v1.34.0 | 14 Sep 24 09:56 PDT | 14 Sep 24 09:56 PDT |
	|         | /opt/local-path-provisioner/pvc-40cba7ad-1232-4eba-80ce-dc029efeb173_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-528000 addons disable                                                                | addons-528000        | jenkins | v1.34.0 | 14 Sep 24 09:56 PDT |                     |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-528000 ip                                                                            | addons-528000        | jenkins | v1.34.0 | 14 Sep 24 09:56 PDT | 14 Sep 24 09:56 PDT |
	| addons  | addons-528000 addons disable                                                                | addons-528000        | jenkins | v1.34.0 | 14 Sep 24 09:56 PDT | 14 Sep 24 09:56 PDT |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/14 09:43:15
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 09:43:15.307053    1679 out.go:345] Setting OutFile to fd 1 ...
	I0914 09:43:15.307176    1679 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 09:43:15.307180    1679 out.go:358] Setting ErrFile to fd 2...
	I0914 09:43:15.307182    1679 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 09:43:15.307310    1679 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19643-1079/.minikube/bin
	I0914 09:43:15.308356    1679 out.go:352] Setting JSON to false
	I0914 09:43:15.324661    1679 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":758,"bootTime":1726331437,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0914 09:43:15.324723    1679 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0914 09:43:15.329139    1679 out.go:177] * [addons-528000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0914 09:43:15.335135    1679 out.go:177]   - MINIKUBE_LOCATION=19643
	I0914 09:43:15.335193    1679 notify.go:220] Checking for updates...
	I0914 09:43:15.342097    1679 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	I0914 09:43:15.345060    1679 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 09:43:15.348076    1679 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 09:43:15.351077    1679 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	I0914 09:43:15.352339    1679 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 09:43:15.355271    1679 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 09:43:15.359086    1679 out.go:177] * Using the qemu2 driver based on user configuration
	I0914 09:43:15.364064    1679 start.go:297] selected driver: qemu2
	I0914 09:43:15.364071    1679 start.go:901] validating driver "qemu2" against <nil>
	I0914 09:43:15.364078    1679 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 09:43:15.366271    1679 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0914 09:43:15.369038    1679 out.go:177] * Automatically selected the socket_vmnet network
	I0914 09:43:15.372169    1679 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 09:43:15.372189    1679 cni.go:84] Creating CNI manager for ""
	I0914 09:43:15.372215    1679 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 09:43:15.372223    1679 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0914 09:43:15.372252    1679 start.go:340] cluster config:
	{Name:addons-528000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-528000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 09:43:15.375857    1679 iso.go:125] acquiring lock: {Name:mk9ce8cb266895455f3a8fa26b67755853c5c1f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 09:43:15.384093    1679 out.go:177] * Starting "addons-528000" primary control-plane node in "addons-528000" cluster
	I0914 09:43:15.388056    1679 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0914 09:43:15.388071    1679 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0914 09:43:15.388082    1679 cache.go:56] Caching tarball of preloaded images
	I0914 09:43:15.388151    1679 preload.go:172] Found /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 09:43:15.388156    1679 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0914 09:43:15.388351    1679 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/addons-528000/config.json ...
	I0914 09:43:15.388363    1679 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/addons-528000/config.json: {Name:mkf2804287f9ed86b340a0ccd1b8a7925f2e0a01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 09:43:15.388749    1679 start.go:360] acquireMachinesLock for addons-528000: {Name:mk3755b4300f547653a71a0686083053e04fcd63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 09:43:15.388817    1679 start.go:364] duration metric: took 62µs to acquireMachinesLock for "addons-528000"
	I0914 09:43:15.388827    1679 start.go:93] Provisioning new machine with config: &{Name:addons-528000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-528000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 09:43:15.388853    1679 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 09:43:15.397074    1679 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0914 09:43:15.632212    1679 start.go:159] libmachine.API.Create for "addons-528000" (driver="qemu2")
	I0914 09:43:15.632261    1679 client.go:168] LocalClient.Create starting
	I0914 09:43:15.632440    1679 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/ca.pem
	I0914 09:43:15.674066    1679 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/cert.pem
	I0914 09:43:15.755061    1679 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19643-1079/.minikube/cache/iso/arm64/minikube-v1.34.0-1726281733-19643-arm64.iso...
	I0914 09:43:16.031345    1679 main.go:141] libmachine: Creating SSH key...
	I0914 09:43:16.214564    1679 main.go:141] libmachine: Creating Disk image...
	I0914 09:43:16.214576    1679 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 09:43:16.214858    1679 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/addons-528000/disk.qcow2.raw /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/addons-528000/disk.qcow2
	I0914 09:43:16.231970    1679 main.go:141] libmachine: STDOUT: 
	I0914 09:43:16.231996    1679 main.go:141] libmachine: STDERR: 
	I0914 09:43:16.232064    1679 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/addons-528000/disk.qcow2 +20000M
	I0914 09:43:16.240145    1679 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 09:43:16.240165    1679 main.go:141] libmachine: STDERR: 
	I0914 09:43:16.240179    1679 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/addons-528000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/addons-528000/disk.qcow2
	I0914 09:43:16.240185    1679 main.go:141] libmachine: Starting QEMU VM...
	I0914 09:43:16.240226    1679 qemu.go:418] Using hvf for hardware acceleration
	I0914 09:43:16.240259    1679 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/addons-528000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/addons-528000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/addons-528000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:21:ec:16:3b:13 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/addons-528000/disk.qcow2
	I0914 09:43:16.298542    1679 main.go:141] libmachine: STDOUT: 
	I0914 09:43:16.298582    1679 main.go:141] libmachine: STDERR: 
	I0914 09:43:16.298586    1679 main.go:141] libmachine: Attempt 0
	I0914 09:43:16.298602    1679 main.go:141] libmachine: Searching for e:21:ec:16:3b:13 in /var/db/dhcpd_leases ...
	I0914 09:43:16.298660    1679 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0914 09:43:16.298681    1679 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66e70e80}
	I0914 09:43:18.300822    1679 main.go:141] libmachine: Attempt 1
	I0914 09:43:18.300898    1679 main.go:141] libmachine: Searching for e:21:ec:16:3b:13 in /var/db/dhcpd_leases ...
	I0914 09:43:18.301259    1679 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0914 09:43:18.301310    1679 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66e70e80}
	I0914 09:43:20.303538    1679 main.go:141] libmachine: Attempt 2
	I0914 09:43:20.303659    1679 main.go:141] libmachine: Searching for e:21:ec:16:3b:13 in /var/db/dhcpd_leases ...
	I0914 09:43:20.303973    1679 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0914 09:43:20.304024    1679 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66e70e80}
	I0914 09:43:22.306149    1679 main.go:141] libmachine: Attempt 3
	I0914 09:43:22.306195    1679 main.go:141] libmachine: Searching for e:21:ec:16:3b:13 in /var/db/dhcpd_leases ...
	I0914 09:43:22.306295    1679 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0914 09:43:22.306320    1679 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66e70e80}
	I0914 09:43:24.308311    1679 main.go:141] libmachine: Attempt 4
	I0914 09:43:24.308324    1679 main.go:141] libmachine: Searching for e:21:ec:16:3b:13 in /var/db/dhcpd_leases ...
	I0914 09:43:24.308351    1679 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0914 09:43:24.308357    1679 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66e70e80}
	I0914 09:43:26.310363    1679 main.go:141] libmachine: Attempt 5
	I0914 09:43:26.310378    1679 main.go:141] libmachine: Searching for e:21:ec:16:3b:13 in /var/db/dhcpd_leases ...
	I0914 09:43:26.310415    1679 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0914 09:43:26.310425    1679 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66e70e80}
	I0914 09:43:28.312454    1679 main.go:141] libmachine: Attempt 6
	I0914 09:43:28.312479    1679 main.go:141] libmachine: Searching for e:21:ec:16:3b:13 in /var/db/dhcpd_leases ...
	I0914 09:43:28.312578    1679 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0914 09:43:28.312588    1679 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66e70e80}
	I0914 09:43:30.313224    1679 main.go:141] libmachine: Attempt 7
	I0914 09:43:30.313251    1679 main.go:141] libmachine: Searching for e:21:ec:16:3b:13 in /var/db/dhcpd_leases ...
	I0914 09:43:30.313385    1679 main.go:141] libmachine: Found 2 entries in /var/db/dhcpd_leases!
	I0914 09:43:30.313399    1679 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:e:21:ec:16:3b:13 ID:1,e:21:ec:16:3b:13 Lease:0x66e70eb0}
	I0914 09:43:30.313405    1679 main.go:141] libmachine: Found match: e:21:ec:16:3b:13
	I0914 09:43:30.313415    1679 main.go:141] libmachine: IP: 192.168.105.2
	I0914 09:43:30.313419    1679 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
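The "Attempt N" loop above is the qemu2 driver polling macOS's DHCP lease database until the VM's MAC address appears. A simplified sketch of that lookup, assuming the usual /var/db/dhcpd_leases layout in which ip_address precedes hw_address within each lease block (the leaseIP helper is hypothetical, not minikube's implementation):

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// leaseIP scans /var/db/dhcpd_leases for the block whose hw_address
	// ends in the given MAC and returns the ip_address recorded above it.
	func leaseIP(mac string) (string, error) {
		f, err := os.Open("/var/db/dhcpd_leases")
		if err != nil {
			return "", err
		}
		defer f.Close()
		ip := ""
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if strings.HasPrefix(line, "ip_address=") {
				ip = strings.TrimPrefix(line, "ip_address=")
			}
			// Lease entries look like "hw_address=1,e:21:ec:16:3b:13".
			if strings.HasPrefix(line, "hw_address=") && strings.HasSuffix(line, ","+mac) {
				return ip, nil
			}
		}
		if err := sc.Err(); err != nil {
			return "", err
		}
		return "", fmt.Errorf("no DHCP lease found for %s", mac)
	}

	func main() {
		// MAC from the log, with the leading zero stripped as dhcpd stores it.
		ip, err := leaseIP("e:21:ec:16:3b:13")
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("VM IP:", ip)
	}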
	I0914 09:43:32.334006    1679 machine.go:93] provisionDockerMachine start ...
	I0914 09:43:32.335519    1679 main.go:141] libmachine: Using SSH client type: native
	I0914 09:43:32.335970    1679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1028e5190] 0x1028e79d0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0914 09:43:32.335986    1679 main.go:141] libmachine: About to run SSH command:
	hostname
	I0914 09:43:32.413929    1679 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
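provisionDockerMachine drives everything that follows over SSH, starting with the hostname probe just above. A rough equivalent using golang.org/x/crypto/ssh (an assumed dependency; minikube's own SSH plumbing differs), with the user, address, and key path taken from this run's log:

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		keyBytes, err := os.ReadFile("/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/addons-528000/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(keyBytes)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
		}
		client, err := ssh.Dial("tcp", "192.168.105.2:22", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer sess.Close()
		out, err := sess.Output("hostname")
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s", out) // "minikube", as in the log above
	}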
	I0914 09:43:32.413958    1679 buildroot.go:166] provisioning hostname "addons-528000"
	I0914 09:43:32.414112    1679 main.go:141] libmachine: Using SSH client type: native
	I0914 09:43:32.414358    1679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1028e5190] 0x1028e79d0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0914 09:43:32.414370    1679 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-528000 && echo "addons-528000" | sudo tee /etc/hostname
	I0914 09:43:32.487638    1679 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-528000
	
	I0914 09:43:32.487749    1679 main.go:141] libmachine: Using SSH client type: native
	I0914 09:43:32.487939    1679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1028e5190] 0x1028e79d0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0914 09:43:32.487953    1679 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-528000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-528000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-528000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 09:43:32.550418    1679 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 09:43:32.550430    1679 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19643-1079/.minikube CaCertPath:/Users/jenkins/minikube-integration/19643-1079/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19643-1079/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19643-1079/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19643-1079/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19643-1079/.minikube}
	I0914 09:43:32.550445    1679 buildroot.go:174] setting up certificates
	I0914 09:43:32.550453    1679 provision.go:84] configureAuth start
	I0914 09:43:32.550462    1679 provision.go:143] copyHostCerts
	I0914 09:43:32.550579    1679 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19643-1079/.minikube/ca.pem (1078 bytes)
	I0914 09:43:32.550841    1679 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19643-1079/.minikube/cert.pem (1123 bytes)
	I0914 09:43:32.550982    1679 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19643-1079/.minikube/key.pem (1675 bytes)
	I0914 09:43:32.551088    1679 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19643-1079/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19643-1079/.minikube/certs/ca-key.pem org=jenkins.addons-528000 san=[127.0.0.1 192.168.105.2 addons-528000 localhost minikube]
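The server certificate is generated with the SAN list shown above: loopback, the VM's IP, and the host names. A compact sketch of issuing a certificate with those SANs via crypto/x509; note that it self-signs for brevity, whereas minikube signs with its CA key, so treat this only as an illustration of the SAN handling:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.addons-528000"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs from the log line above.
			DNSNames:    []string{"addons-528000", "localhost", "minikube"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.105.2")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
	}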
	I0914 09:43:32.766277    1679 provision.go:177] copyRemoteCerts
	I0914 09:43:32.766351    1679 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 09:43:32.766373    1679 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/addons-528000/id_rsa Username:docker}
	I0914 09:43:32.797969    1679 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0914 09:43:32.806399    1679 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 09:43:32.814665    1679 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 09:43:32.822981    1679 provision.go:87] duration metric: took 272.5245ms to configureAuth
	I0914 09:43:32.822990    1679 buildroot.go:189] setting minikube options for container-runtime
	I0914 09:43:32.823094    1679 config.go:182] Loaded profile config "addons-528000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 09:43:32.823133    1679 main.go:141] libmachine: Using SSH client type: native
	I0914 09:43:32.823224    1679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1028e5190] 0x1028e79d0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0914 09:43:32.823229    1679 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0914 09:43:32.878817    1679 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0914 09:43:32.878827    1679 buildroot.go:70] root file system type: tmpfs
	I0914 09:43:32.878885    1679 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0914 09:43:32.878931    1679 main.go:141] libmachine: Using SSH client type: native
	I0914 09:43:32.879027    1679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1028e5190] 0x1028e79d0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0914 09:43:32.879060    1679 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0914 09:43:32.936960    1679 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0914 09:43:32.937013    1679 main.go:141] libmachine: Using SSH client type: native
	I0914 09:43:32.937120    1679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1028e5190] 0x1028e79d0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0914 09:43:32.937131    1679 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0914 09:43:34.327397    1679 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0914 09:43:34.327409    1679 machine.go:96] duration metric: took 1.993426083s to provisionDockerMachine
	I0914 09:43:34.327415    1679 client.go:171] duration metric: took 18.695656917s to LocalClient.Create
	I0914 09:43:34.327427    1679 start.go:167] duration metric: took 18.695727833s to libmachine.API.Create "addons-528000"
	I0914 09:43:34.327431    1679 start.go:293] postStartSetup for "addons-528000" (driver="qemu2")
	I0914 09:43:34.327437    1679 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 09:43:34.327523    1679 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 09:43:34.327533    1679 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/addons-528000/id_rsa Username:docker}
	I0914 09:43:34.358767    1679 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 09:43:34.360406    1679 info.go:137] Remote host: Buildroot 2023.02.9
	I0914 09:43:34.360414    1679 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19643-1079/.minikube/addons for local assets ...
	I0914 09:43:34.360513    1679 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19643-1079/.minikube/files for local assets ...
	I0914 09:43:34.360543    1679 start.go:296] duration metric: took 33.110041ms for postStartSetup
	I0914 09:43:34.360967    1679 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/addons-528000/config.json ...
	I0914 09:43:34.361137    1679 start.go:128] duration metric: took 18.972793917s to createHost
	I0914 09:43:34.361170    1679 main.go:141] libmachine: Using SSH client type: native
	I0914 09:43:34.361262    1679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1028e5190] 0x1028e79d0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0914 09:43:34.361266    1679 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 09:43:34.414343    1679 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726332214.195084086
	
	I0914 09:43:34.414351    1679 fix.go:216] guest clock: 1726332214.195084086
	I0914 09:43:34.414355    1679 fix.go:229] Guest: 2024-09-14 09:43:34.195084086 -0700 PDT Remote: 2024-09-14 09:43:34.361139 -0700 PDT m=+19.073258918 (delta=-166.054914ms)
	I0914 09:43:34.414366    1679 fix.go:200] guest clock delta is within tolerance: -166.054914ms
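The guest-clock check above works by running date +%s.%N inside the VM and differencing the result against the host clock. Reproducing the logged arithmetic (both timestamps are from this run; the one-second tolerance is an assumption for the sketch, as the actual threshold is internal to minikube):

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	func main() {
		guestOut := "1726332214.195084086" // `date +%s.%N` over SSH (from the log)
		parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
		sec, _ := strconv.ParseInt(parts[0], 10, 64)
		nsec, _ := strconv.ParseInt(parts[1], 10, 64)
		guest := time.Unix(sec, nsec)
		host := time.Unix(1726332214, 361139000) // host clock at the comparison (from the log)
		delta := guest.Sub(host)                 // -166.054914ms, matching the log line
		if delta > -time.Second && delta < time.Second {
			fmt.Printf("guest clock delta %v within tolerance\n", delta)
		} else {
			fmt.Printf("guest clock delta %v too large\n", delta)
		}
	}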
	I0914 09:43:34.414368    1679 start.go:83] releasing machines lock for "addons-528000", held for 19.026062667s
	I0914 09:43:34.414670    1679 ssh_runner.go:195] Run: cat /version.json
	I0914 09:43:34.414682    1679 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/addons-528000/id_rsa Username:docker}
	I0914 09:43:34.414670    1679 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 09:43:34.414767    1679 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/addons-528000/id_rsa Username:docker}
	I0914 09:43:34.488919    1679 ssh_runner.go:195] Run: systemctl --version
	I0914 09:43:34.491319    1679 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 09:43:34.493356    1679 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 09:43:34.493390    1679 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 09:43:34.499794    1679 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 09:43:34.499802    1679 start.go:495] detecting cgroup driver to use...
	I0914 09:43:34.499924    1679 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 09:43:34.506100    1679 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0914 09:43:34.510050    1679 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0914 09:43:34.513966    1679 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0914 09:43:34.514005    1679 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0914 09:43:34.517889    1679 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0914 09:43:34.521885    1679 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0914 09:43:34.525723    1679 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0914 09:43:34.529534    1679 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 09:43:34.533397    1679 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0914 09:43:34.537347    1679 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0914 09:43:34.541368    1679 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0914 09:43:34.545144    1679 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 09:43:34.548598    1679 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 09:43:34.551917    1679 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 09:43:34.631742    1679 ssh_runner.go:195] Run: sudo systemctl restart containerd
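
Each `sed -i -r` invocation above rewrites one key of /etc/containerd/config.toml in place; the daemon-reload and restart then pick the edits up. A local sketch of the SystemdCgroup substitution using Go's regexp instead of sed (the pattern mirrors the command in the log; the sample TOML snippet is illustrative):

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	config := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = true
`
	// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	updated := re.ReplaceAllString(config, "${1}SystemdCgroup = false")
	fmt.Print(updated)
}
```
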
	I0914 09:43:34.639862    1679 start.go:495] detecting cgroup driver to use...
	I0914 09:43:34.639940    1679 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0914 09:43:34.646036    1679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 09:43:34.652417    1679 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 09:43:34.658927    1679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 09:43:34.664152    1679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0914 09:43:34.669400    1679 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0914 09:43:34.709623    1679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0914 09:43:34.716083    1679 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 09:43:34.722497    1679 ssh_runner.go:195] Run: which cri-dockerd
	I0914 09:43:34.723959    1679 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0914 09:43:34.727135    1679 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0914 09:43:34.733103    1679 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0914 09:43:34.815838    1679 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0914 09:43:34.894874    1679 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0914 09:43:34.894926    1679 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0914 09:43:34.901001    1679 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 09:43:34.983143    1679 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0914 09:43:37.181134    1679 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.198031041s)
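
The 130-byte /etc/docker/daemon.json pushed just above is what switches Docker to the cgroupfs driver; its contents are not shown in the log. A hedged reconstruction: "exec-opts" with native.cgroupdriver is Docker's documented knob for this, but the exact field set minikube writes is an assumption here.

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// "exec-opts" with native.cgroupdriver is Docker's documented way to
	// select the cgroup driver; the other field is purely illustrative.
	daemon := map[string]any{
		"exec-opts":  []string{"native.cgroupdriver=cgroupfs"},
		"log-driver": "json-file",
	}
	out, err := json.MarshalIndent(daemon, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```
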
	I0914 09:43:37.181203    1679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0914 09:43:37.186672    1679 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0914 09:43:37.193675    1679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0914 09:43:37.199109    1679 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0914 09:43:37.284891    1679 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0914 09:43:37.371186    1679 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 09:43:37.451202    1679 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0914 09:43:37.457998    1679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0914 09:43:37.463537    1679 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 09:43:37.546373    1679 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0914 09:43:37.570737    1679 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0914 09:43:37.570849    1679 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0914 09:43:37.573984    1679 start.go:563] Will wait 60s for crictl version
	I0914 09:43:37.574039    1679 ssh_runner.go:195] Run: which crictl
	I0914 09:43:37.575505    1679 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 09:43:37.594284    1679 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0914 09:43:37.594364    1679 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0914 09:43:37.606347    1679 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0914 09:43:37.621978    1679 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0914 09:43:37.622122    1679 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0914 09:43:37.623824    1679 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
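
The bash one-liner above makes the /etc/hosts update idempotent: drop any line already ending in the hostname, append a fresh record, copy the result back. The same filter-then-append logic in Go, on an in-memory copy (the sudo cp write-back from the log is elided):

```go
package main

import (
	"fmt"
	"strings"
)

// upsertHostRecord removes any line already ending in "\t<name>" and
// appends "<ip>\t<name>", mirroring the grep -v / echo pipeline.
func upsertHostRecord(hosts, ip, name string) string {
	suffix := "\t" + name
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if strings.HasSuffix(line, suffix) {
			continue
		}
		kept = append(kept, line)
	}
	// Drop a trailing empty element so the record lands on its own line.
	if n := len(kept); n > 0 && kept[n-1] == "" {
		kept = kept[:n-1]
	}
	kept = append(kept, ip+suffix, "")
	return strings.Join(kept, "\n")
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n10.0.0.9\thost.minikube.internal\n"
	fmt.Print(upsertHostRecord(hosts, "192.168.105.1", "host.minikube.internal"))
}
```
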
	I0914 09:43:37.627996    1679 kubeadm.go:883] updating cluster {Name:addons-528000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-528000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 09:43:37.628082    1679 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0914 09:43:37.628136    1679 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0914 09:43:37.633230    1679 docker.go:685] Got preloaded images: 
	I0914 09:43:37.633238    1679 docker.go:691] registry.k8s.io/kube-apiserver:v1.31.1 wasn't preloaded
	I0914 09:43:37.633278    1679 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0914 09:43:37.637087    1679 ssh_runner.go:195] Run: which lz4
	I0914 09:43:37.638397    1679 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0914 09:43:37.639800    1679 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 09:43:37.639810    1679 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (322160019 bytes)
	I0914 09:43:38.905743    1679 docker.go:649] duration metric: took 1.267418167s to copy over tarball
	I0914 09:43:38.905837    1679 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 09:43:39.852130    1679 ssh_runner.go:146] rm: /preloaded.tar.lz4
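
The sequence above is: stat the target to see whether the preload tarball already exists on the guest, scp the ~322 MB archive over when the stat fails, untar it into /var with lz4 and xattrs preserved, then delete it. A local sketch of the same check-then-extract ordering (ssh/scp transport elided; assumes lz4 is installed):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4" // path used in the log

	if _, err := os.Stat(tarball); err != nil {
		// In minikube this is where the tarball would be scp'd over;
		// here we just report the miss.
		fmt.Printf("existence check for %s failed: %v (would copy tarball)\n", tarball, err)
		return
	}
	// Same flags as the log: keep xattrs, decompress with lz4, unpack into /var.
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Printf("extract failed: %v\n", err)
	}
}
```
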
	I0914 09:43:39.866736    1679 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0914 09:43:39.870380    1679 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I0914 09:43:39.876369    1679 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 09:43:39.966979    1679 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0914 09:43:42.166145    1679 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.1992085s)
	I0914 09:43:42.166259    1679 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0914 09:43:42.172590    1679 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0914 09:43:42.172601    1679 cache_images.go:84] Images are preloaded, skipping loading
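
`docker images --format {{.Repository}}:{{.Tag}}` runs twice: once before the preload, when the list is empty and kube-apiserver "wasn't preloaded", and once after extraction, when all eight images appear. A sketch of the membership check behind that decision (the expected list is copied from the log; the helper name is invented):

```go
package main

import "fmt"

// imagesPreloaded reports whether every expected image shows up in the
// `docker images --format {{.Repository}}:{{.Tag}}` output.
func imagesPreloaded(got []string, want []string) bool {
	have := make(map[string]bool, len(got))
	for _, img := range got {
		have[img] = true
	}
	for _, img := range want {
		if !have[img] {
			fmt.Printf("%s wasn't preloaded\n", img)
			return false
		}
	}
	return true
}

func main() {
	want := []string{
		"registry.k8s.io/kube-apiserver:v1.31.1",
		"registry.k8s.io/kube-controller-manager:v1.31.1",
		"registry.k8s.io/kube-scheduler:v1.31.1",
		"registry.k8s.io/kube-proxy:v1.31.1",
		"registry.k8s.io/coredns/coredns:v1.11.3",
		"registry.k8s.io/etcd:3.5.15-0",
		"registry.k8s.io/pause:3.10",
		"gcr.io/k8s-minikube/storage-provisioner:v5",
	}
	fmt.Println(imagesPreloaded(nil, want))  // before preload: false
	fmt.Println(imagesPreloaded(want, want)) // after extract: true
}
```
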
	I0914 09:43:42.172629    1679 kubeadm.go:934] updating node { 192.168.105.2 8443 v1.31.1 docker true true} ...
	I0914 09:43:42.172693    1679 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-528000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-528000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 09:43:42.172779    1679 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0914 09:43:42.195007    1679 cni.go:84] Creating CNI manager for ""
	I0914 09:43:42.195018    1679 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 09:43:42.195024    1679 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 09:43:42.195034    1679 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-528000 NodeName:addons-528000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 09:43:42.195113    1679 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-528000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
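
The kubeadm.go:181 options struct is rendered into the three-document YAML above. A much-reduced sketch of that rendering step with text/template, covering only a handful of fields (minikube's real template carries every option shown in the struct):

```go
package main

import (
	"os"
	"text/template"
)

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix://{{.CRISocket}}
  name: "{{.NodeName}}"
`

type kubeadmOpts struct {
	AdvertiseAddress string
	APIServerPort    int
	CRISocket        string
	NodeName         string
}

func main() {
	// Values taken from the options struct logged above.
	opts := kubeadmOpts{
		AdvertiseAddress: "192.168.105.2",
		APIServerPort:    8443,
		CRISocket:        "/var/run/cri-dockerd.sock",
		NodeName:         "addons-528000",
	}
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	if err := t.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}
```
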
	I0914 09:43:42.195188    1679 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0914 09:43:42.198766    1679 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 09:43:42.198804    1679 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 09:43:42.202266    1679 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0914 09:43:42.208421    1679 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 09:43:42.214176    1679 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0914 09:43:42.220216    1679 ssh_runner.go:195] Run: grep 192.168.105.2	control-plane.minikube.internal$ /etc/hosts
	I0914 09:43:42.221571    1679 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 09:43:42.225855    1679 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 09:43:42.307975    1679 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 09:43:42.315548    1679 certs.go:68] Setting up /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/addons-528000 for IP: 192.168.105.2
	I0914 09:43:42.315558    1679 certs.go:194] generating shared ca certs ...
	I0914 09:43:42.315570    1679 certs.go:226] acquiring lock for ca certs: {Name:mk7a785a7c5445527aceab92dcaa64cad76e8086 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 09:43:42.315778    1679 certs.go:240] generating "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19643-1079/.minikube/ca.key
	I0914 09:43:42.360463    1679 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19643-1079/.minikube/ca.crt ...
	I0914 09:43:42.360479    1679 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19643-1079/.minikube/ca.crt: {Name:mk61d583c799d7ee32535dbb9e6600a3c0599edd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 09:43:42.360791    1679 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19643-1079/.minikube/ca.key ...
	I0914 09:43:42.360795    1679 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19643-1079/.minikube/ca.key: {Name:mk63523fcb51ba8e4113792e243744d64bc5c115 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 09:43:42.360916    1679 certs.go:240] generating "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19643-1079/.minikube/proxy-client-ca.key
	I0914 09:43:42.424373    1679 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19643-1079/.minikube/proxy-client-ca.crt ...
	I0914 09:43:42.424377    1679 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19643-1079/.minikube/proxy-client-ca.crt: {Name:mk530257a9f2c7492e93894324447740741625f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 09:43:42.424519    1679 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19643-1079/.minikube/proxy-client-ca.key ...
	I0914 09:43:42.424523    1679 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19643-1079/.minikube/proxy-client-ca.key: {Name:mkb557e114d9f78f4c8ad8bb45eb0be950da1954 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 09:43:42.424650    1679 certs.go:256] generating profile certs ...
	I0914 09:43:42.424681    1679 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/addons-528000/client.key
	I0914 09:43:42.424688    1679 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/addons-528000/client.crt with IP's: []
	I0914 09:43:42.533036    1679 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/addons-528000/client.crt ...
	I0914 09:43:42.533039    1679 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/addons-528000/client.crt: {Name:mk7b63edc8c560b9dcfdb9327c40db0b75cb67d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 09:43:42.533173    1679 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/addons-528000/client.key ...
	I0914 09:43:42.533176    1679 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/addons-528000/client.key: {Name:mkc7019c8c16c35460df9fcb1b848957f9704073 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 09:43:42.533280    1679 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/addons-528000/apiserver.key.2bf1688d
	I0914 09:43:42.533289    1679 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/addons-528000/apiserver.crt.2bf1688d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.105.2]
	I0914 09:43:42.825434    1679 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/addons-528000/apiserver.crt.2bf1688d ...
	I0914 09:43:42.825444    1679 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/addons-528000/apiserver.crt.2bf1688d: {Name:mka716373005702e4f970df9d474121871aff270 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 09:43:42.825661    1679 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/addons-528000/apiserver.key.2bf1688d ...
	I0914 09:43:42.825665    1679 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/addons-528000/apiserver.key.2bf1688d: {Name:mk8aa2c1a9e50a8861c8e8b6171c394262cb6abe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 09:43:42.825773    1679 certs.go:381] copying /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/addons-528000/apiserver.crt.2bf1688d -> /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/addons-528000/apiserver.crt
	I0914 09:43:42.826121    1679 certs.go:385] copying /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/addons-528000/apiserver.key.2bf1688d -> /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/addons-528000/apiserver.key
	I0914 09:43:42.826235    1679 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/addons-528000/proxy-client.key
	I0914 09:43:42.826247    1679 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/addons-528000/proxy-client.crt with IP's: []
	I0914 09:43:42.986583    1679 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/addons-528000/proxy-client.crt ...
	I0914 09:43:42.986591    1679 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/addons-528000/proxy-client.crt: {Name:mk7c7481da37a2bf3f3623238873559478e67863 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 09:43:42.986806    1679 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/addons-528000/proxy-client.key ...
	I0914 09:43:42.986809    1679 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/addons-528000/proxy-client.key: {Name:mk43c2b6a9f7d1a0ef3fa273d4f36c3d9cb8be1c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 09:43:42.987067    1679 certs.go:484] found cert: /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/ca-key.pem (1675 bytes)
	I0914 09:43:42.987092    1679 certs.go:484] found cert: /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/ca.pem (1078 bytes)
	I0914 09:43:42.987112    1679 certs.go:484] found cert: /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/cert.pem (1123 bytes)
	I0914 09:43:42.987129    1679 certs.go:484] found cert: /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/key.pem (1675 bytes)
	I0914 09:43:42.987514    1679 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19643-1079/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 09:43:42.997841    1679 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19643-1079/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 09:43:43.006569    1679 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19643-1079/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 09:43:43.015147    1679 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19643-1079/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 09:43:43.024540    1679 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/addons-528000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0914 09:43:43.032480    1679 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/addons-528000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 09:43:43.040635    1679 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/addons-528000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 09:43:43.048640    1679 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/addons-528000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0914 09:43:43.056906    1679 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19643-1079/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 09:43:43.065339    1679 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
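
certs.go above generates a shared CA plus profile certs signed by it, with the apiserver cert carrying the service IP, loopback, and node IP as SANs. A compressed crypto/x509 sketch of that chain; key sizes, lifetimes, and subjects here are placeholders, while the four SAN IPs are the ones from the log:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Self-signed CA, standing in for minikubeCA.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	caCert, _ := x509.ParseCertificate(caDER)

	// Apiserver cert signed by the CA, with the IP SANs from the log.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.105.2"),
		},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("CA: %d bytes DER, apiserver cert: %d bytes DER\n", len(caDER), len(srvDER))
}
```
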
	I0914 09:43:43.072131    1679 ssh_runner.go:195] Run: openssl version
	I0914 09:43:43.074466    1679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 09:43:43.078445    1679 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 09:43:43.079935    1679 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 14 16:43 /usr/share/ca-certificates/minikubeCA.pem
	I0914 09:43:43.079960    1679 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 09:43:43.081917    1679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 09:43:43.085761    1679 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 09:43:43.087196    1679 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0914 09:43:43.087239    1679 kubeadm.go:392] StartCluster: {Name:addons-528000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-528000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 09:43:43.087316    1679 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0914 09:43:43.092422    1679 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 09:43:43.095957    1679 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 09:43:43.099146    1679 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 09:43:43.102423    1679 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 09:43:43.102429    1679 kubeadm.go:157] found existing configuration files:
	
	I0914 09:43:43.102456    1679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 09:43:43.105790    1679 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 09:43:43.105819    1679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 09:43:43.109366    1679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 09:43:43.113014    1679 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 09:43:43.113047    1679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 09:43:43.116581    1679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 09:43:43.119948    1679 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 09:43:43.119975    1679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 09:43:43.123084    1679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 09:43:43.126244    1679 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 09:43:43.126270    1679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 09:43:43.129794    1679 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 09:43:43.151175    1679 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0914 09:43:43.151239    1679 kubeadm.go:310] [preflight] Running pre-flight checks
	I0914 09:43:43.188111    1679 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 09:43:43.188164    1679 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 09:43:43.188213    1679 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0914 09:43:43.192730    1679 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 09:43:43.208938    1679 out.go:235]   - Generating certificates and keys ...
	I0914 09:43:43.208971    1679 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0914 09:43:43.209000    1679 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0914 09:43:43.506434    1679 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0914 09:43:43.772043    1679 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0914 09:43:43.877638    1679 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0914 09:43:44.170829    1679 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0914 09:43:44.255302    1679 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0914 09:43:44.255375    1679 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-528000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0914 09:43:44.427590    1679 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0914 09:43:44.427660    1679 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-528000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0914 09:43:44.528619    1679 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0914 09:43:44.670203    1679 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0914 09:43:44.745152    1679 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0914 09:43:44.745185    1679 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 09:43:44.830326    1679 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 09:43:44.915590    1679 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0914 09:43:45.001094    1679 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 09:43:45.153187    1679 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 09:43:45.287621    1679 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 09:43:45.287802    1679 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 09:43:45.289017    1679 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 09:43:45.292294    1679 out.go:235]   - Booting up control plane ...
	I0914 09:43:45.292348    1679 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 09:43:45.292384    1679 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 09:43:45.292423    1679 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 09:43:45.296318    1679 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 09:43:45.298634    1679 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 09:43:45.298663    1679 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0914 09:43:45.390939    1679 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0914 09:43:45.391001    1679 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0914 09:43:45.901226    1679 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 509.529209ms
	I0914 09:43:45.901476    1679 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0914 09:43:48.903759    1679 kubeadm.go:310] [api-check] The API server is healthy after 3.002710626s
	I0914 09:43:48.922842    1679 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0914 09:43:48.934113    1679 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0914 09:43:48.954440    1679 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0914 09:43:48.954597    1679 kubeadm.go:310] [mark-control-plane] Marking the node addons-528000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0914 09:43:48.959075    1679 kubeadm.go:310] [bootstrap-token] Using token: oecuqr.wgq6pz8vxz0m4yhg
	I0914 09:43:48.971857    1679 out.go:235]   - Configuring RBAC rules ...
	I0914 09:43:48.971946    1679 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0914 09:43:48.972001    1679 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0914 09:43:48.973782    1679 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0914 09:43:48.974911    1679 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0914 09:43:48.976083    1679 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0914 09:43:48.977154    1679 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0914 09:43:49.317765    1679 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0914 09:43:49.720668    1679 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0914 09:43:50.324495    1679 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0914 09:43:50.325877    1679 kubeadm.go:310] 
	I0914 09:43:50.325945    1679 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0914 09:43:50.325959    1679 kubeadm.go:310] 
	I0914 09:43:50.326071    1679 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0914 09:43:50.326082    1679 kubeadm.go:310] 
	I0914 09:43:50.326111    1679 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0914 09:43:50.326173    1679 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0914 09:43:50.326271    1679 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0914 09:43:50.326280    1679 kubeadm.go:310] 
	I0914 09:43:50.326338    1679 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0914 09:43:50.326345    1679 kubeadm.go:310] 
	I0914 09:43:50.326419    1679 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0914 09:43:50.326426    1679 kubeadm.go:310] 
	I0914 09:43:50.326487    1679 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0914 09:43:50.326569    1679 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0914 09:43:50.326645    1679 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0914 09:43:50.326653    1679 kubeadm.go:310] 
	I0914 09:43:50.326747    1679 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0914 09:43:50.326828    1679 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0914 09:43:50.326837    1679 kubeadm.go:310] 
	I0914 09:43:50.326939    1679 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token oecuqr.wgq6pz8vxz0m4yhg \
	I0914 09:43:50.327072    1679 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6f2bcbe86b7524eabb66e32d65311e5f1e28ed403ce521627df0d2c85d84c574 \
	I0914 09:43:50.327098    1679 kubeadm.go:310] 	--control-plane 
	I0914 09:43:50.327104    1679 kubeadm.go:310] 
	I0914 09:43:50.327218    1679 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0914 09:43:50.327226    1679 kubeadm.go:310] 
	I0914 09:43:50.327327    1679 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token oecuqr.wgq6pz8vxz0m4yhg \
	I0914 09:43:50.327460    1679 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6f2bcbe86b7524eabb66e32d65311e5f1e28ed403ce521627df0d2c85d84c574 
	I0914 09:43:50.327847    1679 kubeadm.go:310] W0914 16:43:42.931029    1603 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0914 09:43:50.328156    1679 kubeadm.go:310] W0914 16:43:42.931344    1603 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0914 09:43:50.328273    1679 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 09:43:50.328288    1679 cni.go:84] Creating CNI manager for ""
	I0914 09:43:50.328301    1679 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 09:43:50.332432    1679 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 09:43:50.336515    1679 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 09:43:50.342232    1679 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0914 09:43:50.350215    1679 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 09:43:50.350304    1679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 09:43:50.350345    1679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-528000 minikube.k8s.io/updated_at=2024_09_14T09_43_50_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=fbeeb744274463b05401c917e5ab21bbaf5ef95a minikube.k8s.io/name=addons-528000 minikube.k8s.io/primary=true
	I0914 09:43:50.410074    1679 ops.go:34] apiserver oom_adj: -16
	I0914 09:43:50.410176    1679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 09:43:50.912277    1679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 09:43:51.412212    1679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 09:43:51.912236    1679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 09:43:52.412209    1679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 09:43:52.912188    1679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 09:43:53.412222    1679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 09:43:53.912236    1679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 09:43:54.410342    1679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 09:43:54.912113    1679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 09:43:54.945020    1679 kubeadm.go:1113] duration metric: took 4.594887666s to wait for elevateKubeSystemPrivileges
	I0914 09:43:54.945034    1679 kubeadm.go:394] duration metric: took 11.85811775s to StartCluster
	I0914 09:43:54.945046    1679 settings.go:142] acquiring lock: {Name:mk7db576f28fda26cf1d7d854618889d7d4f8a49 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 09:43:54.945224    1679 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19643-1079/kubeconfig
	I0914 09:43:54.945406    1679 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19643-1079/kubeconfig: {Name:mk2bfa274931cfcaab81c340801bce4006cf7459 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 09:43:54.945646    1679 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0914 09:43:54.945678    1679 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 09:43:54.945763    1679 config.go:182] Loaded profile config "addons-528000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 09:43:54.945756    1679 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0914 09:43:54.945801    1679 addons.go:69] Setting yakd=true in profile "addons-528000"
	I0914 09:43:54.945802    1679 addons.go:69] Setting default-storageclass=true in profile "addons-528000"
	I0914 09:43:54.945808    1679 addons.go:234] Setting addon yakd=true in "addons-528000"
	I0914 09:43:54.945810    1679 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-528000"
	I0914 09:43:54.945814    1679 addons.go:69] Setting cloud-spanner=true in profile "addons-528000"
	I0914 09:43:54.945820    1679 host.go:66] Checking if "addons-528000" exists ...
	I0914 09:43:54.945821    1679 addons.go:234] Setting addon cloud-spanner=true in "addons-528000"
	I0914 09:43:54.945824    1679 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-528000"
	I0914 09:43:54.945871    1679 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-528000"
	I0914 09:43:54.945880    1679 host.go:66] Checking if "addons-528000" exists ...
	I0914 09:43:54.945833    1679 host.go:66] Checking if "addons-528000" exists ...
	I0914 09:43:54.945837    1679 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-528000"
	I0914 09:43:54.945988    1679 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-528000"
	I0914 09:43:54.946000    1679 host.go:66] Checking if "addons-528000" exists ...
	I0914 09:43:54.945840    1679 addons.go:69] Setting gcp-auth=true in profile "addons-528000"
	I0914 09:43:54.946042    1679 mustload.go:65] Loading cluster: addons-528000
	I0914 09:43:54.945843    1679 addons.go:69] Setting ingress=true in profile "addons-528000"
	I0914 09:43:54.946108    1679 addons.go:234] Setting addon ingress=true in "addons-528000"
	I0914 09:43:54.946117    1679 config.go:182] Loaded profile config "addons-528000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 09:43:54.946120    1679 host.go:66] Checking if "addons-528000" exists ...
	I0914 09:43:54.945845    1679 addons.go:69] Setting ingress-dns=true in profile "addons-528000"
	I0914 09:43:54.946174    1679 addons.go:234] Setting addon ingress-dns=true in "addons-528000"
	I0914 09:43:54.946172    1679 retry.go:31] will retry after 1.300452164s: connect: dial unix /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/addons-528000/monitor: connect: connection refused
	I0914 09:43:54.946182    1679 host.go:66] Checking if "addons-528000" exists ...
	I0914 09:43:54.946365    1679 retry.go:31] will retry after 1.221802924s: connect: dial unix /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/addons-528000/monitor: connect: connection refused
	I0914 09:43:54.946378    1679 retry.go:31] will retry after 1.138633612s: connect: dial unix /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/addons-528000/monitor: connect: connection refused
	I0914 09:43:54.945844    1679 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-528000"
	I0914 09:43:54.946404    1679 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-528000"
	I0914 09:43:54.945848    1679 addons.go:69] Setting inspektor-gadget=true in profile "addons-528000"
	I0914 09:43:54.946417    1679 addons.go:234] Setting addon inspektor-gadget=true in "addons-528000"
	I0914 09:43:54.946426    1679 host.go:66] Checking if "addons-528000" exists ...
	I0914 09:43:54.945850    1679 addons.go:69] Setting registry=true in profile "addons-528000"
	I0914 09:43:54.946467    1679 addons.go:234] Setting addon registry=true in "addons-528000"
	I0914 09:43:54.946491    1679 retry.go:31] will retry after 516.664881ms: connect: dial unix /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/addons-528000/monitor: connect: connection refused
	I0914 09:43:54.946504    1679 host.go:66] Checking if "addons-528000" exists ...
	I0914 09:43:54.945851    1679 addons.go:69] Setting metrics-server=true in profile "addons-528000"
	I0914 09:43:54.946533    1679 addons.go:234] Setting addon metrics-server=true in "addons-528000"
	I0914 09:43:54.946539    1679 host.go:66] Checking if "addons-528000" exists ...
	I0914 09:43:54.945853    1679 addons.go:69] Setting storage-provisioner=true in profile "addons-528000"
	I0914 09:43:54.946548    1679 addons.go:234] Setting addon storage-provisioner=true in "addons-528000"
	I0914 09:43:54.946558    1679 host.go:66] Checking if "addons-528000" exists ...
	I0914 09:43:54.946538    1679 retry.go:31] will retry after 1.175232876s: connect: dial unix /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/addons-528000/monitor: connect: connection refused
	I0914 09:43:54.945856    1679 addons.go:69] Setting volumesnapshots=true in profile "addons-528000"
	I0914 09:43:54.946613    1679 retry.go:31] will retry after 1.292295359s: connect: dial unix /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/addons-528000/monitor: connect: connection refused
	I0914 09:43:54.946604    1679 addons.go:234] Setting addon volumesnapshots=true in "addons-528000"
	I0914 09:43:54.946624    1679 retry.go:31] will retry after 1.045903383s: connect: dial unix /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/addons-528000/monitor: connect: connection refused
	I0914 09:43:54.946631    1679 host.go:66] Checking if "addons-528000" exists ...
	I0914 09:43:54.946398    1679 retry.go:31] will retry after 857.956508ms: connect: dial unix /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/addons-528000/monitor: connect: connection refused
	I0914 09:43:54.945854    1679 addons.go:69] Setting volcano=true in profile "addons-528000"
	I0914 09:43:54.946670    1679 addons.go:234] Setting addon volcano=true in "addons-528000"
	I0914 09:43:54.946740    1679 retry.go:31] will retry after 993.254323ms: connect: dial unix /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/addons-528000/monitor: connect: connection refused
	I0914 09:43:54.946767    1679 retry.go:31] will retry after 1.222769893s: connect: dial unix /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/addons-528000/monitor: connect: connection refused
	I0914 09:43:54.946782    1679 host.go:66] Checking if "addons-528000" exists ...
	I0914 09:43:54.946832    1679 retry.go:31] will retry after 1.127848589s: connect: dial unix /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/addons-528000/monitor: connect: connection refused
	I0914 09:43:54.947053    1679 retry.go:31] will retry after 841.290553ms: connect: dial unix /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/addons-528000/monitor: connect: connection refused
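
The interleaved retry.go:31 lines show each addon's monitor-socket dial failing with "connection refused" and being retried after a slightly different delay. A sketch of that dial-with-jittered-retry shape (the socket path here is hypothetical, and the actual backoff policy in minikube's retry.go may differ):

```go
package main

import (
	"fmt"
	"math/rand"
	"net"
	"time"
)

// dialWithRetry keeps dialing the QEMU monitor socket, sleeping a
// jittered delay between attempts, until it connects or gives up.
func dialWithRetry(socketPath string, attempts int) (net.Conn, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		conn, err := net.Dial("unix", socketPath)
		if err == nil {
			return conn, nil
		}
		lastErr = err
		// Jittered delay in the 0.5s-1.5s range, similar in spirit to the
		// varying "will retry after ..." intervals in the log.
		delay := 500*time.Millisecond + time.Duration(rand.Int63n(int64(time.Second)))
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return nil, lastErr
}

func main() {
	conn, err := dialWithRetry("/tmp/addons-528000/monitor", 3)
	if err != nil {
		fmt.Println("giving up:", err)
		return
	}
	defer conn.Close()
	fmt.Println("connected")
}
```
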
	I0914 09:43:54.948197    1679 addons.go:234] Setting addon default-storageclass=true in "addons-528000"
	I0914 09:43:54.950229    1679 host.go:66] Checking if "addons-528000" exists ...
	I0914 09:43:54.949962    1679 out.go:177] * Verifying Kubernetes components...
	I0914 09:43:54.950775    1679 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 09:43:54.953211    1679 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 09:43:54.953219    1679 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/addons-528000/id_rsa Username:docker}
	I0914 09:43:54.956855    1679 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0914 09:43:54.956861    1679 out.go:177]   - Using image docker.io/registry:2.8.3
	I0914 09:43:54.960965    1679 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 09:43:54.966869    1679 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0914 09:43:54.966878    1679 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0914 09:43:54.966887    1679 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/addons-528000/id_rsa Username:docker}
	I0914 09:43:54.973903    1679 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0914 09:43:54.977910    1679 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0914 09:43:54.977917    1679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0914 09:43:54.977929    1679 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/addons-528000/id_rsa Username:docker}
	I0914 09:43:54.996216    1679 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
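
The pipeline above patches the coredns ConfigMap in place: it inserts a hosts{} stanza resolving host.minikube.internal before the forward directive and a log directive before errors. A Go sketch of the hosts-block insertion (the sample Corefile is illustrative; the real one comes from the ConfigMap):

```go
package main

import (
	"fmt"
	"strings"
)

// injectHostsBlock inserts a hosts{} stanza before the forward directive,
// mirroring the sed pipeline that patches the coredns ConfigMap.
func injectHostsBlock(corefile, ip, name string) string {
	block := fmt.Sprintf("        hosts {\n           %s %s\n           fallthrough\n        }\n", ip, name)
	var out strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			out.WriteString(block)
		}
		out.WriteString(line)
	}
	return out.String()
}

func main() {
	corefile := `.:53 {
        errors
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
    }
`
	fmt.Print(injectHostsBlock(corefile, "192.168.105.1", "host.minikube.internal"))
}
```
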
	I0914 09:43:55.069427    1679 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 09:43:55.094046    1679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 09:43:55.167840    1679 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0914 09:43:55.167852    1679 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0914 09:43:55.173254    1679 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0914 09:43:55.173267    1679 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0914 09:43:55.177711    1679 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0914 09:43:55.177717    1679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0914 09:43:55.183810    1679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0914 09:43:55.192436    1679 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0914 09:43:55.192449    1679 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0914 09:43:55.200891    1679 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0914 09:43:55.200905    1679 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0914 09:43:55.208172    1679 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0914 09:43:55.208181    1679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0914 09:43:55.229634    1679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0914 09:43:55.260384    1679 start.go:971] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	I0914 09:43:55.260879    1679 node_ready.go:35] waiting up to 6m0s for node "addons-528000" to be "Ready" ...
	I0914 09:43:55.271168    1679 node_ready.go:49] node "addons-528000" has status "Ready":"True"
	I0914 09:43:55.271188    1679 node_ready.go:38] duration metric: took 10.284708ms for node "addons-528000" to be "Ready" ...
	I0914 09:43:55.271193    1679 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 09:43:55.280466    1679 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-csgbk" in "kube-system" namespace to be "Ready" ...
	I0914 09:43:55.468851    1679 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0914 09:43:55.474950    1679 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0914 09:43:55.474962    1679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0914 09:43:55.474973    1679 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/addons-528000/id_rsa Username:docker}
	I0914 09:43:55.480310    1679 addons.go:475] Verifying addon registry=true in "addons-528000"
	I0914 09:43:55.482757    1679 out.go:177] * Verifying registry addon...
	I0914 09:43:55.490282    1679 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0914 09:43:55.492459    1679 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0914 09:43:55.492467    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
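The kapi.go lines that repeat through the rest of this log poll a label selector until the matching pods leave Pending. A rough client-go equivalent of that loop (waitForLabel is an illustrative name, not minikube's function):

	package kapiwait

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// waitForLabel polls until every pod matching selector is Running,
	// much like the repeated "current state: Pending" lines above.
	func waitForLabel(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
				metav1.ListOptions{LabelSelector: selector})
			if err == nil && len(pods.Items) > 0 {
				running := true
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						running = false // still Pending
					}
				}
				if running {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return context.DeadlineExceeded
	}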
	I0914 09:43:55.511448    1679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0914 09:43:55.535820    1679 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-528000 service yakd-dashboard -n yakd-dashboard
	
	I0914 09:43:55.763183    1679 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-528000" context rescaled to 1 replicas
	I0914 09:43:55.793805    1679 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0914 09:43:55.801768    1679 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0914 09:43:55.807778    1679 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0914 09:43:55.808331    1679 host.go:66] Checking if "addons-528000" exists ...
	I0914 09:43:55.811184    1679 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0914 09:43:55.811193    1679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0914 09:43:55.811203    1679 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/addons-528000/id_rsa Username:docker}
	I0914 09:43:55.853140    1679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0914 09:43:55.944816    1679 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0914 09:43:55.948805    1679 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0914 09:43:55.948818    1679 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0914 09:43:55.948845    1679 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/addons-528000/id_rsa Username:docker}
	I0914 09:43:55.993545    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 09:43:55.996795    1679 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0914 09:43:55.999795    1679 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0914 09:43:55.999803    1679 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0914 09:43:55.999814    1679 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/addons-528000/id_rsa Username:docker}
	I0914 09:43:56.023060    1679 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0914 09:43:56.023071    1679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0914 09:43:56.071905    1679 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0914 09:43:56.071920    1679 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0914 09:43:56.078777    1679 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0914 09:43:56.081769    1679 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0914 09:43:56.081781    1679 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0914 09:43:56.081791    1679 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/addons-528000/id_rsa Username:docker}
	I0914 09:43:56.088155    1679 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0914 09:43:56.091829    1679 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0914 09:43:56.091836    1679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0914 09:43:56.091845    1679 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/addons-528000/id_rsa Username:docker}
	I0914 09:43:56.092134    1679 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0914 09:43:56.092141    1679 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0914 09:43:56.125820    1679 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0914 09:43:56.129713    1679 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0914 09:43:56.129722    1679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0914 09:43:56.129732    1679 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/addons-528000/id_rsa Username:docker}
	I0914 09:43:56.133871    1679 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0914 09:43:56.133882    1679 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0914 09:43:56.147102    1679 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0914 09:43:56.147114    1679 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0914 09:43:56.162973    1679 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 09:43:56.162987    1679 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0914 09:43:56.173809    1679 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0914 09:43:56.180820    1679 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0914 09:43:56.186869    1679 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0914 09:43:56.186882    1679 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0914 09:43:56.190837    1679 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0914 09:43:56.193202    1679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 09:43:56.194759    1679 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 09:43:56.198876    1679 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0914 09:43:56.198884    1679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0914 09:43:56.198893    1679 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/addons-528000/id_rsa Username:docker}
	I0914 09:43:56.202776    1679 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 09:43:56.202784    1679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 09:43:56.202792    1679 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/addons-528000/id_rsa Username:docker}
	I0914 09:43:56.204483    1679 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0914 09:43:56.204491    1679 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0914 09:43:56.214248    1679 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0914 09:43:56.214258    1679 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0914 09:43:56.217467    1679 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0914 09:43:56.217475    1679 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0914 09:43:56.232541    1679 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0914 09:43:56.232554    1679 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0914 09:43:56.240177    1679 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0914 09:43:56.240188    1679 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0914 09:43:56.241100    1679 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-528000"
	I0914 09:43:56.241119    1679 host.go:66] Checking if "addons-528000" exists ...
	I0914 09:43:56.244721    1679 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0914 09:43:56.248799    1679 out.go:177]   - Using image docker.io/busybox:stable
	I0914 09:43:56.252765    1679 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0914 09:43:56.252772    1679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0914 09:43:56.252782    1679 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/addons-528000/id_rsa Username:docker}
	I0914 09:43:56.253070    1679 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0914 09:43:56.253077    1679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0914 09:43:56.253101    1679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0914 09:43:56.253101    1679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0914 09:43:56.253232    1679 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0914 09:43:56.253258    1679 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0914 09:43:56.257748    1679 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0914 09:43:56.261799    1679 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0914 09:43:56.266335    1679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0914 09:43:56.271747    1679 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0914 09:43:56.278507    1679 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0914 09:43:56.278516    1679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0914 09:43:56.281614    1679 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0914 09:43:56.293737    1679 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0914 09:43:56.301792    1679 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0914 09:43:56.305812    1679 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0914 09:43:56.308763    1679 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0914 09:43:56.312806    1679 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0914 09:43:56.312820    1679 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0914 09:43:56.312833    1679 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/addons-528000/id_rsa Username:docker}
	I0914 09:43:56.327469    1679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0914 09:43:56.328329    1679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0914 09:43:56.332575    1679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0914 09:43:56.335068    1679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 09:43:56.494357    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 09:43:56.542003    1679 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0914 09:43:56.542016    1679 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0914 09:43:56.631935    1679 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0914 09:43:56.631954    1679 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0914 09:43:56.742028    1679 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0914 09:43:56.742040    1679 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0914 09:43:56.854663    1679 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0914 09:43:56.854676    1679 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0914 09:43:56.931554    1679 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0914 09:43:56.931569    1679 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0914 09:43:57.002843    1679 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0914 09:43:57.002853    1679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0914 09:43:57.008142    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 09:43:57.038238    1679 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0914 09:43:57.038250    1679 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0914 09:43:57.061990    1679 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0914 09:43:57.062000    1679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0914 09:43:57.155321    1679 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0914 09:43:57.155333    1679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0914 09:43:57.302857    1679 pod_ready.go:103] pod "coredns-7c65d6cfc9-csgbk" in "kube-system" namespace has status "Ready":"False"
	I0914 09:43:57.305196    1679 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0914 09:43:57.305206    1679 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0914 09:43:57.458774    1679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0914 09:43:57.494092    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 09:43:57.994841    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 09:43:58.556217    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 09:43:59.007110    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 09:43:59.219563    1679 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (3.026425875s)
	I0914 09:43:59.219585    1679 addons.go:475] Verifying addon metrics-server=true in "addons-528000"
	I0914 09:43:59.219589    1679 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (3.366529042s)
	I0914 09:43:59.219596    1679 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.966514208s)
	I0914 09:43:59.219612    1679 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.966586s)
	I0914 09:43:59.219642    1679 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (2.953376916s)
	I0914 09:43:59.219671    1679 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (2.89227025s)
	I0914 09:43:59.219775    1679 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.891512917s)
	W0914 09:43:59.219789    1679 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0914 09:43:59.219805    1679 retry.go:31] will retry after 279.545786ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
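The failure above is an ordering race, not a broken manifest: the VolumeSnapshotClass is submitted in the same apply batch as the CRDs that define its kind, and the API server has not yet registered snapshot.storage.k8s.io/v1 when the CR arrives, hence "no matches for kind". minikube simply retries, and the re-apply at 09:43:59.499 below (with --force) succeeds. A hedged alternative sketch, assuming the apiextensions clientset: wait for the CRD's Established condition before applying dependent resources.

	package crdwait

	import (
		"context"
		"time"

		apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
		apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	)

	// waitEstablished polls a CRD until the API server marks it
	// Established, i.e. its kinds are servable.
	func waitEstablished(cs *apiextclient.Clientset, name string, timeout time.Duration) bool {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			crd, err := cs.ApiextensionsV1().CustomResourceDefinitions().Get(
				context.TODO(), name, metav1.GetOptions{})
			if err == nil {
				for _, c := range crd.Status.Conditions {
					if c.Type == apiextv1.Established && c.Status == apiextv1.ConditionTrue {
						return true
					}
				}
			}
			time.Sleep(250 * time.Millisecond)
		}
		return false
	}

Called here as, e.g., waitEstablished(cs, "volumesnapshotclasses.snapshot.storage.k8s.io", time.Minute) before applying csi-hostpath-snapshotclass.yaml.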
	I0914 09:43:59.295246    1679 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.960244333s)
	I0914 09:43:59.295406    1679 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (2.962903292s)
	I0914 09:43:59.295415    1679 addons.go:475] Verifying addon ingress=true in "addons-528000"
	I0914 09:43:59.299458    1679 out.go:177] * Verifying ingress addon...
	I0914 09:43:59.306744    1679 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0914 09:43:59.306790    1679 pod_ready.go:103] pod "coredns-7c65d6cfc9-csgbk" in "kube-system" namespace has status "Ready":"False"
	I0914 09:43:59.313002    1679 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0914 09:43:59.498154    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 09:43:59.499464    1679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0914 09:43:59.623283    1679 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.164539041s)
	I0914 09:43:59.623306    1679 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-528000"
	I0914 09:43:59.626419    1679 out.go:177] * Verifying csi-hostpath-driver addon...
	I0914 09:43:59.633787    1679 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0914 09:43:59.637543    1679 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0914 09:43:59.637552    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:43:59.994354    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 09:44:00.138829    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:00.494129    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 09:44:00.638023    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:00.994273    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 09:44:01.138316    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:01.492435    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 09:44:01.638991    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:01.784755    1679 pod_ready.go:103] pod "coredns-7c65d6cfc9-csgbk" in "kube-system" namespace has status "Ready":"False"
	I0914 09:44:01.996450    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 09:44:02.142124    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:02.496183    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 09:44:02.638746    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:02.994275    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 09:44:03.138164    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:03.495048    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 09:44:03.614441    1679 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0914 09:44:03.614456    1679 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/addons-528000/id_rsa Username:docker}
	I0914 09:44:03.638377    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:03.648141    1679 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0914 09:44:03.655862    1679 addons.go:234] Setting addon gcp-auth=true in "addons-528000"
	I0914 09:44:03.655884    1679 host.go:66] Checking if "addons-528000" exists ...
	I0914 09:44:03.656641    1679 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0914 09:44:03.656650    1679 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/addons-528000/id_rsa Username:docker}
	I0914 09:44:03.688930    1679 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0914 09:44:03.693843    1679 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0914 09:44:03.699849    1679 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0914 09:44:03.699856    1679 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0914 09:44:03.706788    1679 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0914 09:44:03.706797    1679 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0914 09:44:03.715987    1679 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0914 09:44:03.715994    1679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0914 09:44:03.727480    1679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0914 09:44:03.785366    1679 pod_ready.go:103] pod "coredns-7c65d6cfc9-csgbk" in "kube-system" namespace has status "Ready":"False"
	I0914 09:44:03.993895    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 09:44:04.081438    1679 addons.go:475] Verifying addon gcp-auth=true in "addons-528000"
	I0914 09:44:04.085638    1679 out.go:177] * Verifying gcp-auth addon...
	I0914 09:44:04.093015    1679 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0914 09:44:04.094438    1679 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0914 09:44:04.196156    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:04.494101    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 09:44:04.637665    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:04.993993    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 09:44:05.137906    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:05.494088    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 09:44:05.637886    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:05.993938    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 09:44:06.136976    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:06.284398    1679 pod_ready.go:103] pod "coredns-7c65d6cfc9-csgbk" in "kube-system" namespace has status "Ready":"False"
	I0914 09:44:06.493905    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 09:44:06.636518    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:06.993401    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 09:44:07.139077    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:07.494083    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 09:44:07.637631    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:07.993862    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 09:44:08.136506    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:08.285303    1679 pod_ready.go:103] pod "coredns-7c65d6cfc9-csgbk" in "kube-system" namespace has status "Ready":"False"
	I0914 09:44:08.492042    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 09:44:08.636560    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:08.993515    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 09:44:09.137775    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:09.494004    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 09:44:09.637613    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:09.994085    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 09:44:10.138054    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:10.493874    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 09:44:10.637766    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:10.785140    1679 pod_ready.go:103] pod "coredns-7c65d6cfc9-csgbk" in "kube-system" namespace has status "Ready":"False"
	I0914 09:44:10.993839    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 09:44:11.137568    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:11.493978    1679 kapi.go:107] duration metric: took 16.004139709s to wait for kubernetes.io/minikube-addons=registry ...
	I0914 09:44:11.638066    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:12.136102    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:12.637186    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:13.135898    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:13.284659    1679 pod_ready.go:103] pod "coredns-7c65d6cfc9-csgbk" in "kube-system" namespace has status "Ready":"False"
	I0914 09:44:13.637705    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:14.196045    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:14.637887    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:15.137619    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:15.284699    1679 pod_ready.go:103] pod "coredns-7c65d6cfc9-csgbk" in "kube-system" namespace has status "Ready":"False"
	I0914 09:44:15.638008    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:16.136289    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:16.637462    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:17.137692    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:17.640572    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:17.784378    1679 pod_ready.go:103] pod "coredns-7c65d6cfc9-csgbk" in "kube-system" namespace has status "Ready":"False"
	I0914 09:44:18.135602    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:18.638064    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:19.137412    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:19.636470    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:19.784348    1679 pod_ready.go:103] pod "coredns-7c65d6cfc9-csgbk" in "kube-system" namespace has status "Ready":"False"
	I0914 09:44:20.137164    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:20.637792    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:21.137545    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:21.637016    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:21.784551    1679 pod_ready.go:103] pod "coredns-7c65d6cfc9-csgbk" in "kube-system" namespace has status "Ready":"False"
	I0914 09:44:22.136600    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:22.637064    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:23.138579    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:23.637615    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:24.137171    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:24.284402    1679 pod_ready.go:103] pod "coredns-7c65d6cfc9-csgbk" in "kube-system" namespace has status "Ready":"False"
	I0914 09:44:24.637130    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:25.137419    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:25.637445    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:26.137403    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:26.635647    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:26.784255    1679 pod_ready.go:103] pod "coredns-7c65d6cfc9-csgbk" in "kube-system" namespace has status "Ready":"False"
	I0914 09:44:27.137072    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:27.637739    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:28.135995    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:28.635940    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:29.137315    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:29.284492    1679 pod_ready.go:103] pod "coredns-7c65d6cfc9-csgbk" in "kube-system" namespace has status "Ready":"False"
	I0914 09:44:29.636958    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:30.198391    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:30.639701    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:31.137799    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:31.284584    1679 pod_ready.go:93] pod "coredns-7c65d6cfc9-csgbk" in "kube-system" namespace has status "Ready":"True"
	I0914 09:44:31.284593    1679 pod_ready.go:82] duration metric: took 36.005085667s for pod "coredns-7c65d6cfc9-csgbk" in "kube-system" namespace to be "Ready" ...
	I0914 09:44:31.284598    1679 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-p4l7h" in "kube-system" namespace to be "Ready" ...
	I0914 09:44:31.285577    1679 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-p4l7h" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-p4l7h" not found
	I0914 09:44:31.285586    1679 pod_ready.go:82] duration metric: took 984.917µs for pod "coredns-7c65d6cfc9-p4l7h" in "kube-system" namespace to be "Ready" ...
	E0914 09:44:31.285590    1679 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-p4l7h" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-p4l7h" not found
	I0914 09:44:31.285593    1679 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-528000" in "kube-system" namespace to be "Ready" ...
	I0914 09:44:31.287763    1679 pod_ready.go:93] pod "etcd-addons-528000" in "kube-system" namespace has status "Ready":"True"
	I0914 09:44:31.287769    1679 pod_ready.go:82] duration metric: took 2.172625ms for pod "etcd-addons-528000" in "kube-system" namespace to be "Ready" ...
	I0914 09:44:31.287773    1679 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-528000" in "kube-system" namespace to be "Ready" ...
	I0914 09:44:31.289986    1679 pod_ready.go:93] pod "kube-apiserver-addons-528000" in "kube-system" namespace has status "Ready":"True"
	I0914 09:44:31.289994    1679 pod_ready.go:82] duration metric: took 2.218667ms for pod "kube-apiserver-addons-528000" in "kube-system" namespace to be "Ready" ...
	I0914 09:44:31.289998    1679 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-528000" in "kube-system" namespace to be "Ready" ...
	I0914 09:44:31.291893    1679 pod_ready.go:93] pod "kube-controller-manager-addons-528000" in "kube-system" namespace has status "Ready":"True"
	I0914 09:44:31.291898    1679 pod_ready.go:82] duration metric: took 1.896583ms for pod "kube-controller-manager-addons-528000" in "kube-system" namespace to be "Ready" ...
	I0914 09:44:31.291902    1679 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4hs9z" in "kube-system" namespace to be "Ready" ...
	I0914 09:44:31.485409    1679 pod_ready.go:93] pod "kube-proxy-4hs9z" in "kube-system" namespace has status "Ready":"True"
	I0914 09:44:31.485419    1679 pod_ready.go:82] duration metric: took 193.518875ms for pod "kube-proxy-4hs9z" in "kube-system" namespace to be "Ready" ...
	I0914 09:44:31.485424    1679 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-528000" in "kube-system" namespace to be "Ready" ...
	I0914 09:44:31.637103    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:31.885196    1679 pod_ready.go:93] pod "kube-scheduler-addons-528000" in "kube-system" namespace has status "Ready":"True"
	I0914 09:44:31.885205    1679 pod_ready.go:82] duration metric: took 399.787459ms for pod "kube-scheduler-addons-528000" in "kube-system" namespace to be "Ready" ...
	I0914 09:44:31.885208    1679 pod_ready.go:39] duration metric: took 36.614999625s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
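The pod_ready.go checks above key off the pod's PodReady condition: a pod counts as "Ready" only once that condition reports True. A minimal helper in the same spirit:

	package podready

	import corev1 "k8s.io/api/core/v1"

	// isPodReady reports whether the pod's PodReady condition is True,
	// which is what the "Ready":"True" log lines above are checking.
	func isPodReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}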
	I0914 09:44:31.885218    1679 api_server.go:52] waiting for apiserver process to appear ...
	I0914 09:44:31.885300    1679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 09:44:31.892356    1679 api_server.go:72] duration metric: took 36.947668584s to wait for apiserver process to appear ...
	I0914 09:44:31.892364    1679 api_server.go:88] waiting for apiserver healthz status ...
	I0914 09:44:31.892372    1679 api_server.go:253] Checking apiserver healthz at https://192.168.105.2:8443/healthz ...
	I0914 09:44:31.895700    1679 api_server.go:279] https://192.168.105.2:8443/healthz returned 200:
	ok
	I0914 09:44:31.896217    1679 api_server.go:141] control plane version: v1.31.1
	I0914 09:44:31.896224    1679 api_server.go:131] duration metric: took 3.856667ms to wait for apiserver health ...
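The healthz probe logged just above is a plain HTTPS GET that expects a 200 response with body "ok". A sketch; InsecureSkipVerify here stands in for the real client-certificate setup minikube uses against the apiserver:

	package healthz

	import (
		"crypto/tls"
		"io"
		"net/http"
	)

	// healthy fetches /healthz and treats a 200 "ok" body as healthy,
	// matching the "returned 200: ok" lines above.
	func healthy(url string) bool {
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // assumption, see lead-in
		}}
		resp, err := client.Get(url)
		if err != nil {
			return false
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		return resp.StatusCode == 200 && string(body) == "ok"
	}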
	I0914 09:44:31.896227    1679 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 09:44:32.088396    1679 system_pods.go:59] 17 kube-system pods found
	I0914 09:44:32.088409    1679 system_pods.go:61] "coredns-7c65d6cfc9-csgbk" [73d04a02-9852-4096-b55f-6f235f19c797] Running
	I0914 09:44:32.088413    1679 system_pods.go:61] "csi-hostpath-attacher-0" [b3900161-d0cb-49d1-af34-737cadc2117e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0914 09:44:32.088415    1679 system_pods.go:61] "csi-hostpath-resizer-0" [9af0fde1-de22-4977-92d4-bb6a76671a4b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0914 09:44:32.088431    1679 system_pods.go:61] "csi-hostpathplugin-fjrqm" [f921b82a-f1f4-43ba-9e36-986debb3da71] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0914 09:44:32.088435    1679 system_pods.go:61] "etcd-addons-528000" [c31172a2-14b3-4f23-8568-a3ee879450b3] Running
	I0914 09:44:32.088438    1679 system_pods.go:61] "kube-apiserver-addons-528000" [da90d108-ba06-4264-b3f4-74e238786021] Running
	I0914 09:44:32.088441    1679 system_pods.go:61] "kube-controller-manager-addons-528000" [8587983e-c6e9-489c-afe7-00fbc31af1e7] Running
	I0914 09:44:32.088446    1679 system_pods.go:61] "kube-ingress-dns-minikube" [47f65b99-06fc-41a3-9011-b0fed690cb58] Running
	I0914 09:44:32.088447    1679 system_pods.go:61] "kube-proxy-4hs9z" [4e1a90d8-2ae4-44d7-932d-22a84d96aedb] Running
	I0914 09:44:32.088450    1679 system_pods.go:61] "kube-scheduler-addons-528000" [9c6e2831-672c-46a5-aa76-124985810f7c] Running
	I0914 09:44:32.088452    1679 system_pods.go:61] "metrics-server-84c5f94fbc-x9frd" [c949acd4-0638-409f-9b76-862d5121cb75] Running
	I0914 09:44:32.088454    1679 system_pods.go:61] "nvidia-device-plugin-daemonset-hq9wm" [0ce5bcbf-d7fd-485b-8a99-7ada928efe6a] Running
	I0914 09:44:32.088455    1679 system_pods.go:61] "registry-66c9cd494c-plmzv" [4a35ee5a-620a-469e-a679-2174aad28170] Running
	I0914 09:44:32.088457    1679 system_pods.go:61] "registry-proxy-nkvcw" [321fe437-23fd-4025-9847-6fd69811ce7a] Running
	I0914 09:44:32.088459    1679 system_pods.go:61] "snapshot-controller-56fcc65765-2hkl8" [807afde5-3740-4f0f-a064-41f46c464796] Running
	I0914 09:44:32.088462    1679 system_pods.go:61] "snapshot-controller-56fcc65765-7s2tw" [1465bcf5-8135-4158-a819-987ceefaa3db] Running
	I0914 09:44:32.088464    1679 system_pods.go:61] "storage-provisioner" [cd25ed00-0fb6-4f36-b9fe-ac9a9f3a4be1] Running
	I0914 09:44:32.088467    1679 system_pods.go:74] duration metric: took 192.241916ms to wait for pod list to return data ...
	I0914 09:44:32.088471    1679 default_sa.go:34] waiting for default service account to be created ...
	I0914 09:44:32.136932    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:32.285244    1679 default_sa.go:45] found service account: "default"
	I0914 09:44:32.285255    1679 default_sa.go:55] duration metric: took 196.786458ms for default service account to be created ...
	I0914 09:44:32.285260    1679 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 09:44:32.487423    1679 system_pods.go:86] 17 kube-system pods found
	I0914 09:44:32.487437    1679 system_pods.go:89] "coredns-7c65d6cfc9-csgbk" [73d04a02-9852-4096-b55f-6f235f19c797] Running
	I0914 09:44:32.487446    1679 system_pods.go:89] "csi-hostpath-attacher-0" [b3900161-d0cb-49d1-af34-737cadc2117e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0914 09:44:32.487450    1679 system_pods.go:89] "csi-hostpath-resizer-0" [9af0fde1-de22-4977-92d4-bb6a76671a4b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0914 09:44:32.487453    1679 system_pods.go:89] "csi-hostpathplugin-fjrqm" [f921b82a-f1f4-43ba-9e36-986debb3da71] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0914 09:44:32.487456    1679 system_pods.go:89] "etcd-addons-528000" [c31172a2-14b3-4f23-8568-a3ee879450b3] Running
	I0914 09:44:32.487459    1679 system_pods.go:89] "kube-apiserver-addons-528000" [da90d108-ba06-4264-b3f4-74e238786021] Running
	I0914 09:44:32.487461    1679 system_pods.go:89] "kube-controller-manager-addons-528000" [8587983e-c6e9-489c-afe7-00fbc31af1e7] Running
	I0914 09:44:32.487463    1679 system_pods.go:89] "kube-ingress-dns-minikube" [47f65b99-06fc-41a3-9011-b0fed690cb58] Running
	I0914 09:44:32.487465    1679 system_pods.go:89] "kube-proxy-4hs9z" [4e1a90d8-2ae4-44d7-932d-22a84d96aedb] Running
	I0914 09:44:32.487470    1679 system_pods.go:89] "kube-scheduler-addons-528000" [9c6e2831-672c-46a5-aa76-124985810f7c] Running
	I0914 09:44:32.487471    1679 system_pods.go:89] "metrics-server-84c5f94fbc-x9frd" [c949acd4-0638-409f-9b76-862d5121cb75] Running
	I0914 09:44:32.487478    1679 system_pods.go:89] "nvidia-device-plugin-daemonset-hq9wm" [0ce5bcbf-d7fd-485b-8a99-7ada928efe6a] Running
	I0914 09:44:32.487480    1679 system_pods.go:89] "registry-66c9cd494c-plmzv" [4a35ee5a-620a-469e-a679-2174aad28170] Running
	I0914 09:44:32.487482    1679 system_pods.go:89] "registry-proxy-nkvcw" [321fe437-23fd-4025-9847-6fd69811ce7a] Running
	I0914 09:44:32.487485    1679 system_pods.go:89] "snapshot-controller-56fcc65765-2hkl8" [807afde5-3740-4f0f-a064-41f46c464796] Running
	I0914 09:44:32.487487    1679 system_pods.go:89] "snapshot-controller-56fcc65765-7s2tw" [1465bcf5-8135-4158-a819-987ceefaa3db] Running
	I0914 09:44:32.487489    1679 system_pods.go:89] "storage-provisioner" [cd25ed00-0fb6-4f36-b9fe-ac9a9f3a4be1] Running
	I0914 09:44:32.487493    1679 system_pods.go:126] duration metric: took 202.235ms to wait for k8s-apps to be running ...
	I0914 09:44:32.487498    1679 system_svc.go:44] waiting for kubelet service to be running ...
	I0914 09:44:32.487569    1679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 09:44:32.495485    1679 system_svc.go:56] duration metric: took 7.979583ms WaitForService to wait for kubelet
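	The service check above is a plain systemd query executed inside the node over SSH. A minimal way to reproduce it by hand, assuming the profile name addons-528000 taken from this log:

	    # Ask systemd whether the kubelet unit is active; --quiet suppresses output,
	    # so only the exit status signals the result.
	    minikube ssh -p addons-528000 -- sudo systemctl is-active --quiet service kubelet \
	        && echo "kubelet is running"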
	I0914 09:44:32.495501    1679 kubeadm.go:582] duration metric: took 37.550830292s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 09:44:32.495515    1679 node_conditions.go:102] verifying NodePressure condition ...
	I0914 09:44:32.637109    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:32.685335    1679 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 09:44:32.685345    1679 node_conditions.go:123] node cpu capacity is 2
	I0914 09:44:32.685350    1679 node_conditions.go:105] duration metric: took 189.837583ms to run NodePressure ...
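	The NodePressure step reads the node's reported capacity and pressure conditions. The data behind the two capacity lines above can be inspected with kubectl (illustrative commands; the node name matches the cluster name in this run):

	    # Capacity fields behind the "ephemeral capacity" and "cpu capacity" lines
	    kubectl get node addons-528000 -o jsonpath='{.status.capacity}'
	    # MemoryPressure / DiskPressure / PIDPressure / Ready conditions
	    kubectl describe node addons-528000 | sed -n '/Conditions:/,/Addresses:/p'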
	I0914 09:44:32.685356    1679 start.go:241] waiting for startup goroutines ...
	I0914 09:44:33.138370    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:33.636318    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:34.136837    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:34.637107    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:35.137837    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:35.637856    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:36.136672    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:36.637046    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:37.136775    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:37.636930    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:38.137149    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:38.635179    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:39.137306    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:39.636944    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:40.136839    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:40.697890    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:41.136801    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:41.636672    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:42.136686    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:42.698360    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:43.137021    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:43.636843    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:44.136997    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:44.636761    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:45.136591    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:45.636502    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:46.137172    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:46.636815    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:47.138807    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:47.634817    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:48.136543    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:48.636561    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:49.136659    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:49.639084    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:50.135369    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:50.636569    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:51.136730    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 09:44:51.636456    1679 kapi.go:107] duration metric: took 52.004076791s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
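	The polling loop that just completed is roughly what kubectl wait does with a label selector. A sketch of the equivalent command, assuming the kube-system namespace (inferred from the pod listing earlier in this log) and an arbitrary timeout:

	    kubectl wait --namespace kube-system \
	        --for=condition=Ready pod \
	        --selector=kubernetes.io/minikube-addons=csi-hostpath-driver \
	        --timeout=5m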
	I0914 09:45:21.307485    1679 kapi.go:86] Found 1 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0914 09:45:21.307497    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:45:21.808518    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:45:22.309410    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:45:22.809928    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:45:23.309229    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:45:23.810449    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:45:24.312176    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:45:24.806842    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:45:25.309196    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:45:25.807042    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:45:26.095088    1679 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0914 09:45:26.095103    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:45:26.309202    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:45:26.601621    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:45:26.813896    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:45:27.095424    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:45:27.310355    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:45:27.598855    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:45:27.809848    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:45:28.095319    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:45:28.307758    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:45:28.599410    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:45:28.812766    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:45:29.096212    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:45:29.308923    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:45:29.594861    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:45:29.808456    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:45:30.097036    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:45:30.308497    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:45:30.600326    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:45:30.813874    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:45:31.095438    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:45:31.307907    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:45:31.596094    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:45:31.811287    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:45:32.098758    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:45:32.310208    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:45:32.594635    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:45:32.807822    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:45:33.094208    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:45:33.307772    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:45:33.593976    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:45:33.808045    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:45:34.095005    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:45:34.308876    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:45:34.595469    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:45:34.807525    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:45:35.095437    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:45:35.307964    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:45:35.596082    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:45:35.809516    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:45:36.095259    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:45:36.313078    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:45:36.598764    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:45:36.814236    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:45:37.094917    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:45:37.308665    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:45:37.599235    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:45:37.809054    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:45:38.098947    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:45:38.307595    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:45:38.598180    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:45:38.811726    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:45:39.097369    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:45:39.308108    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:45:39.595046    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:45:39.807561    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:45:40.094713    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:45:40.308729    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:45:40.595862    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:45:40.808839    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:45:41.093634    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:45:41.308789    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:45:41.595740    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:45:41.808605    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:45:42.098497    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:45:42.312079    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:45:42.598711    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:45:42.814964    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:45:43.099110    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:45:43.311713    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:45:43.593508    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:45:43.808768    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:45:44.098387    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:45:44.310879    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:45:44.600083    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:45:44.814549    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:45:45.099857    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:45:45.308570    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:45:45.595096    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:45:45.809735    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:45:46.095273    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:45:46.308411    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:45:46.597869    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:45:46.810043    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:45:47.098042    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:45:47.312886    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:45:47.596333    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:45:47.809855    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:45:48.098395    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:45:48.308475    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:45:48.595753    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:45:48.812629    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:45:49.098847    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:45:49.308592    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:45:49.594019    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:45:49.807799    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:45:50.095556    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:45:50.308010    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:45:50.595234    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:45:50.812395    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:45:51.095595    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:45:51.309210    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:45:51.599948    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:45:51.813176    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:45:52.095449    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:45:52.309498    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:45:52.594820    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:45:52.808870    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:45:53.095941    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:45:53.308598    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:45:53.596669    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:45:53.808600    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:45:54.096786    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:45:54.309802    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:45:54.595388    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:45:54.812258    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:45:55.096589    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:45:55.307201    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:45:55.594535    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:45:55.807591    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:45:56.093699    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:45:56.307177    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:45:56.602586    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:45:56.811716    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:45:57.098986    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:45:57.308657    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:45:57.595826    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:45:57.808906    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:45:58.094487    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:45:58.305793    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:45:58.595251    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:45:58.808876    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:45:59.094796    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:45:59.308256    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:45:59.593989    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:45:59.807525    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:46:00.095857    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:46:00.311000    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:46:00.599250    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:46:00.812656    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:46:01.094487    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:46:01.308935    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:46:01.597366    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:46:01.812114    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:46:02.100400    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:46:02.308147    1679 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0914 09:46:02.308157    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:46:02.593515    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:46:02.807891    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:46:03.093569    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:46:03.307495    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:46:03.592803    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:46:03.807650    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:46:04.093313    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:46:04.307373    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:46:04.593291    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:46:04.806000    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:46:05.093265    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:46:05.306781    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:46:05.595289    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:46:05.807913    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:46:06.093775    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:46:06.308860    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:46:06.597666    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:46:06.807657    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:46:07.093061    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:46:07.306807    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:46:07.593410    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:46:07.806557    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:46:08.093684    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:46:08.307527    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:46:08.594690    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:46:08.808424    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:46:09.093628    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:46:09.308370    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:46:09.593938    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:46:09.807380    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:46:10.094098    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:46:10.306267    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:46:10.594890    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:46:10.808878    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:46:11.094401    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:46:11.311703    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:46:11.595549    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:46:11.813249    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:46:12.100287    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:46:12.315025    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:46:12.598029    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:46:12.813580    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:46:13.098439    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:46:13.319134    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:46:13.593158    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:46:13.807059    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:46:14.093355    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:46:14.308794    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:46:14.595063    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:46:14.808067    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:46:15.093296    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:46:15.308754    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:46:15.595386    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:46:15.811370    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:46:16.093481    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:46:16.307558    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:46:16.594966    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:46:16.808620    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:46:17.093480    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:46:17.309539    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:46:17.593263    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:46:17.807234    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:46:18.093002    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:46:18.307029    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:46:18.593590    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:46:18.810068    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:46:19.096772    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:46:19.312631    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:46:19.593684    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:46:19.807290    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:46:20.093882    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:46:20.307211    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:46:20.593390    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:46:20.809271    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:46:21.094721    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:46:21.311757    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:46:21.598348    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:46:21.814241    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:46:22.094923    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:46:22.310001    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:46:22.600331    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:46:22.812867    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:46:23.095293    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:46:23.314688    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:46:23.594588    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:46:23.810646    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:46:24.094562    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:46:24.309601    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:46:24.599222    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:46:24.809988    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:46:25.092817    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:46:25.306943    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:46:25.592799    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:46:25.807909    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:46:26.093093    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:46:26.306796    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:46:26.592185    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:46:26.807149    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:46:27.092862    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:46:27.306827    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:46:27.592878    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:46:27.807331    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:46:28.092505    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:46:28.305602    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:46:28.592753    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:46:28.806881    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:46:29.092864    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:46:29.306969    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:46:29.592880    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:46:29.806568    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:46:30.092870    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:46:30.306108    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:46:30.593016    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:46:30.804840    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:46:31.092573    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:46:31.306905    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:46:31.592642    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:46:31.806770    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:46:32.090745    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:46:32.310541    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:46:32.592869    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:46:32.806840    1679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 09:46:33.093467    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:46:33.306974    1679 kapi.go:107] duration metric: took 2m34.004402333s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0914 09:46:33.655026    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:46:34.092788    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:46:34.592432    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:46:35.092792    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:46:35.592776    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:46:36.092853    1679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 09:46:36.594904    1679 kapi.go:107] duration metric: took 2m32.506012292s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0914 09:46:36.600259    1679 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-528000 cluster.
	I0914 09:46:36.612164    1679 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0914 09:46:36.616303    1679 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0914 09:46:36.619264    1679 out.go:177] * Enabled addons: default-storageclass, yakd, ingress-dns, metrics-server, volcano, cloud-spanner, nvidia-device-plugin, inspektor-gadget, storage-provisioner, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0914 09:46:36.623204    1679 addons.go:510] duration metric: took 2m41.681880167s for enable addons: enabled=[default-storageclass yakd ingress-dns metrics-server volcano cloud-spanner nvidia-device-plugin inspektor-gadget storage-provisioner storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
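	As the gcp-auth messages above explain, the webhook skips any pod that already carries the gcp-auth-skip-secret label when it is created. A minimal sketch; the pod name and image are hypothetical, and the "true" value follows minikube's documented convention:

	    # Create a pod that the gcp-auth webhook will leave unmodified
	    kubectl run no-creds-demo --image=busybox \
	        --labels="gcp-auth-skip-secret=true" \
	        --restart=Never -- sleep 3600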
	I0914 09:46:36.623230    1679 start.go:246] waiting for cluster config update ...
	I0914 09:46:36.623247    1679 start.go:255] writing updated cluster config ...
	I0914 09:46:36.623801    1679 ssh_runner.go:195] Run: rm -f paused
	I0914 09:46:36.787325    1679 start.go:600] kubectl: 1.29.2, cluster: 1.31.1 (minor skew: 2)
	I0914 09:46:36.790141    1679 out.go:201] 
	W0914 09:46:36.794412    1679 out.go:270] ! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.31.1.
	I0914 09:46:36.798173    1679 out.go:177]   - Want kubectl v1.31.1? Try 'minikube kubectl -- get pods -A'
	I0914 09:46:36.805223    1679 out.go:177] * Done! kubectl is now configured to use "addons-528000" cluster and "default" namespace by default
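	The closing message means the kubeconfig context was switched automatically. Two quick ways to confirm it against the new cluster (illustrative):

	    kubectl config current-context   # expected: addons-528000
	    kubectl get pods -A              # smoke test against the new context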
	
	
	==> Docker <==
	Sep 14 16:56:18 addons-528000 dockerd[1293]: time="2024-09-14T16:56:18.475190461Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 14 16:56:19 addons-528000 dockerd[1287]: time="2024-09-14T16:56:19.998137579Z" level=info msg="ignoring event" container=cf722dfe73773a00e8e053a1de6ee0f9eb9811b3cb86d6adaab7d52016aa1085 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 14 16:56:19 addons-528000 dockerd[1293]: time="2024-09-14T16:56:19.998411155Z" level=info msg="shim disconnected" id=cf722dfe73773a00e8e053a1de6ee0f9eb9811b3cb86d6adaab7d52016aa1085 namespace=moby
	Sep 14 16:56:19 addons-528000 dockerd[1293]: time="2024-09-14T16:56:19.998473259Z" level=warning msg="cleaning up after shim disconnected" id=cf722dfe73773a00e8e053a1de6ee0f9eb9811b3cb86d6adaab7d52016aa1085 namespace=moby
	Sep 14 16:56:19 addons-528000 dockerd[1293]: time="2024-09-14T16:56:19.998479257Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 14 16:56:29 addons-528000 dockerd[1293]: time="2024-09-14T16:56:29.728853005Z" level=info msg="shim disconnected" id=ef89b8df8dbfd157a71573de751f9b43d77f9d338df912b33606eb2cedd541f3 namespace=moby
	Sep 14 16:56:29 addons-528000 dockerd[1293]: time="2024-09-14T16:56:29.728908030Z" level=warning msg="cleaning up after shim disconnected" id=ef89b8df8dbfd157a71573de751f9b43d77f9d338df912b33606eb2cedd541f3 namespace=moby
	Sep 14 16:56:29 addons-528000 dockerd[1293]: time="2024-09-14T16:56:29.728912487Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 14 16:56:29 addons-528000 dockerd[1287]: time="2024-09-14T16:56:29.729593900Z" level=info msg="ignoring event" container=ef89b8df8dbfd157a71573de751f9b43d77f9d338df912b33606eb2cedd541f3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 14 16:56:29 addons-528000 dockerd[1293]: time="2024-09-14T16:56:29.891400615Z" level=info msg="shim disconnected" id=d3470bc290cd15c7dfd40b7a864940adebc6275ad8b88efd549d5014218c049f namespace=moby
	Sep 14 16:56:29 addons-528000 dockerd[1293]: time="2024-09-14T16:56:29.891434021Z" level=warning msg="cleaning up after shim disconnected" id=d3470bc290cd15c7dfd40b7a864940adebc6275ad8b88efd549d5014218c049f namespace=moby
	Sep 14 16:56:29 addons-528000 dockerd[1293]: time="2024-09-14T16:56:29.891439727Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 14 16:56:29 addons-528000 dockerd[1287]: time="2024-09-14T16:56:29.891551234Z" level=info msg="ignoring event" container=d3470bc290cd15c7dfd40b7a864940adebc6275ad8b88efd549d5014218c049f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 14 16:56:29 addons-528000 dockerd[1287]: time="2024-09-14T16:56:29.930371660Z" level=info msg="ignoring event" container=1acd3586fdeff6fde8f0e8755ab922d5257a9fd06370b06e624c8f7075bc81e2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 14 16:56:29 addons-528000 dockerd[1293]: time="2024-09-14T16:56:29.930466797Z" level=info msg="shim disconnected" id=1acd3586fdeff6fde8f0e8755ab922d5257a9fd06370b06e624c8f7075bc81e2 namespace=moby
	Sep 14 16:56:29 addons-528000 dockerd[1293]: time="2024-09-14T16:56:29.930515574Z" level=warning msg="cleaning up after shim disconnected" id=1acd3586fdeff6fde8f0e8755ab922d5257a9fd06370b06e624c8f7075bc81e2 namespace=moby
	Sep 14 16:56:29 addons-528000 dockerd[1293]: time="2024-09-14T16:56:29.930519489Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 14 16:56:29 addons-528000 dockerd[1287]: time="2024-09-14T16:56:29.989036872Z" level=info msg="ignoring event" container=865d073037f70b54a4d1a64fdde775ca8e6ffab969f5c8c9e118e500333f455f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 14 16:56:29 addons-528000 dockerd[1293]: time="2024-09-14T16:56:29.989104976Z" level=info msg="shim disconnected" id=865d073037f70b54a4d1a64fdde775ca8e6ffab969f5c8c9e118e500333f455f namespace=moby
	Sep 14 16:56:29 addons-528000 dockerd[1293]: time="2024-09-14T16:56:29.989147879Z" level=warning msg="cleaning up after shim disconnected" id=865d073037f70b54a4d1a64fdde775ca8e6ffab969f5c8c9e118e500333f455f namespace=moby
	Sep 14 16:56:29 addons-528000 dockerd[1293]: time="2024-09-14T16:56:29.989152377Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 14 16:56:30 addons-528000 dockerd[1293]: time="2024-09-14T16:56:30.027362511Z" level=info msg="shim disconnected" id=a61d620aa02d9c7b69103dcc58ffc08007670fb21e5d28618f3086e3835b1e1a namespace=moby
	Sep 14 16:56:30 addons-528000 dockerd[1293]: time="2024-09-14T16:56:30.027501175Z" level=warning msg="cleaning up after shim disconnected" id=a61d620aa02d9c7b69103dcc58ffc08007670fb21e5d28618f3086e3835b1e1a namespace=moby
	Sep 14 16:56:30 addons-528000 dockerd[1293]: time="2024-09-14T16:56:30.027505673Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 14 16:56:30 addons-528000 dockerd[1287]: time="2024-09-14T16:56:30.027489512Z" level=info msg="ignoring event" container=a61d620aa02d9c7b69103dcc58ffc08007670fb21e5d28618f3086e3835b1e1a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
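	The repeated "shim disconnected" / "cleaning up after shim disconnected" / "cleaning up dead shim" triplets above are the runtime's normal teardown sequence when a container exits; note they log at info/warning, not error. When scanning such a log for genuine daemon failures, filtering by level helps (a sketch, assuming the docker systemd unit inside the node):

	    minikube ssh -p addons-528000 -- sudo journalctl -u docker --no-pager \
	        | grep -E 'level=(error|fatal)'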
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a5a4f64e6ce20       fc9db2894f4e4                                                                                                                12 seconds ago      Exited              helper-pod                0                   cf722dfe73773       helper-pod-delete-pvc-40cba7ad-1232-4eba-80ce-dc029efeb173
	ad40e6173b366       busybox@sha256:c230832bd3b0be59a6c47ed64294f9ce71e91b327957920b6929a0caa8353140                                              16 seconds ago      Exited              busybox                   0                   d7f3505e17a78       test-local-path
	92a1b8889a78b       busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79                                              21 seconds ago      Exited              helper-pod                0                   f186eaf8084be       helper-pod-create-pvc-40cba7ad-1232-4eba-80ce-dc029efeb173
	73fdbc0930943       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec            49 seconds ago      Exited              gadget                    7                   90f5561cb8948       gadget-zxdlq
	6e69d5f11cf09       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                 9 minutes ago       Running             gcp-auth                  0                   2aedc7b600f40       gcp-auth-89d5ffd79-fsr2x
	c4629b897df7d       registry.k8s.io/ingress-nginx/controller@sha256:d5f8217feeac4887cb1ed21f27c2674e58be06bd8f5184cacea2a69abaf78dce             9 minutes ago       Running             controller                0                   eb20940feee8b       ingress-nginx-controller-bc57996ff-ffs8n
	30b1eb6d9480f       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   10 minutes ago      Exited              patch                     0                   6da166030e924       ingress-nginx-admission-patch-58p4m
	07d3cb11994e7       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   10 minutes ago      Exited              create                    0                   8d02fea2265f2       ingress-nginx-admission-create-9gpcg
	c2fda70c523f2       gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc               12 minutes ago      Running             cloud-spanner-emulator    0                   282cdc9d06c8d       cloud-spanner-emulator-769b77f747-6f6d2
	5413cff86bb8e       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                       12 minutes ago      Running             local-path-provisioner    0                   a0cb1ef49ab62       local-path-provisioner-86d989889c-9gdxd
	21525de523412       registry.k8s.io/metrics-server/metrics-server@sha256:ffcb2bf004d6aa0a17d90e0247cf94f2865c8901dcab4427034c341951c239f9        12 minutes ago      Running             metrics-server            0                   9840d6c677948       metrics-server-84c5f94fbc-x9frd
	3c6c07cb5ef9d       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c             12 minutes ago      Running             minikube-ingress-dns      0                   4a9347411afc2       kube-ingress-dns-minikube
	e65caafb6bbcd       ba04bb24b9575                                                                                                                12 minutes ago      Running             storage-provisioner       0                   fa998e00999c4       storage-provisioner
	b9c75992e03af       2f6c962e7b831                                                                                                                12 minutes ago      Running             coredns                   0                   f0eb9beed613b       coredns-7c65d6cfc9-csgbk
	f043a8e8f752e       24a140c548c07                                                                                                                12 minutes ago      Running             kube-proxy                0                   6f1d6258e9f02       kube-proxy-4hs9z
	f279f990e2681       7f8aa378bb47d                                                                                                                12 minutes ago      Running             kube-scheduler            0                   f20720935d776       kube-scheduler-addons-528000
	75b08ffc79207       d3f53a98c0a9d                                                                                                                12 minutes ago      Running             kube-apiserver            0                   76c9735986285       kube-apiserver-addons-528000
	6b40ebce26485       279f381cb3736                                                                                                                12 minutes ago      Running             kube-controller-manager   0                   95ff73ce9a38b       kube-controller-manager-addons-528000
	dd5540ef485b8       27e3830e14027                                                                                                                12 minutes ago      Running             etcd                      0                   58b9ce2f94f26       etcd-addons-528000
	
	
	==> controller_ingress [c4629b897df7] <==
	W0914 16:46:32.302229       7 client_config.go:659] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
	I0914 16:46:32.302311       7 main.go:205] "Creating API client" host="https://10.96.0.1:443"
	I0914 16:46:32.305189       7 main.go:248] "Running in Kubernetes cluster" major="1" minor="31" git="v1.31.1" state="clean" commit="948afe5ca072329a73c8e79ed5938717a5cb3d21" platform="linux/arm64"
	I0914 16:46:32.376678       7 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
	I0914 16:46:32.386275       7 ssl.go:535] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
	I0914 16:46:32.390378       7 nginx.go:271] "Starting NGINX Ingress controller"
	I0914 16:46:32.396784       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"59e4e493-573e-49ea-aea9-bd02461990aa", APIVersion:"v1", ResourceVersion:"726", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
	I0914 16:46:32.397806       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"50d07555-ac3a-409f-9ac4-240bc43f672b", APIVersion:"v1", ResourceVersion:"731", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I0914 16:46:32.397857       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"2dcacec1-5c82-449d-9fcc-8e28d0b8608a", APIVersion:"v1", ResourceVersion:"732", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0914 16:46:33.592821       7 nginx.go:317] "Starting NGINX process"
	I0914 16:46:33.592974       7 leaderelection.go:250] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0914 16:46:33.593157       7 nginx.go:337] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0914 16:46:33.593581       7 controller.go:193] "Configuration changes detected, backend reload required"
	I0914 16:46:33.602568       7 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0914 16:46:33.602821       7 status.go:85] "New leader elected" identity="ingress-nginx-controller-bc57996ff-ffs8n"
	I0914 16:46:33.605668       7 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-bc57996ff-ffs8n" node="addons-528000"
	I0914 16:46:33.607837       7 controller.go:213] "Backend successfully reloaded"
	I0914 16:46:33.607890       7 controller.go:224] "Initial sync, sleeping for 1 second"
	I0914 16:46:33.608016       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-ffs8n", UID:"93e53a50-16de-4358-9315-f59ee6fe2b12", APIVersion:"v1", ResourceVersion:"1233", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	  Build:         46e76e5916813cfca2a9b0bfdc34b69a0000f6b9
	  Repository:    https://github.com/kubernetes/ingress-nginx
	  nginx version: nginx/1.25.5
	
	-------------------------------------------------------------------------------
	
	
	
	==> coredns [b9c75992e03a] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] 10.244.0.5:55508 - 32741 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000168519s
	[INFO] 10.244.0.5:55508 - 24551 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000206427s
	[INFO] 10.244.0.5:39771 - 177 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000035781s
	[INFO] 10.244.0.5:39771 - 61375 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000058092s
	[INFO] 10.244.0.5:52461 - 8356 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000030777s
	[INFO] 10.244.0.5:52461 - 53415 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000033153s
	[INFO] 10.244.0.5:51257 - 9114 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00003282s
	[INFO] 10.244.0.5:51257 - 55961 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000039284s
	[INFO] 10.244.0.5:35617 - 24434 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000035364s
	[INFO] 10.244.0.5:35617 - 32636 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000011927s
	[INFO] 10.244.0.5:48615 - 5015 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000015263s
	[INFO] 10.244.0.5:48615 - 8340 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000015097s
	[INFO] 10.244.0.5:44597 - 28881 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000016973s
	[INFO] 10.244.0.5:44597 - 19609 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000012302s
	[INFO] 10.244.0.5:56282 - 33951 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000012218s
	[INFO] 10.244.0.5:56282 - 46238 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000011844s
	[INFO] 10.244.0.24:35578 - 39782 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.008934337s
	[INFO] 10.244.0.24:38881 - 45182 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00022438s
	[INFO] 10.244.0.24:40647 - 42910 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.008965469s
	[INFO] 10.244.0.24:46696 - 13679 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000043134s
	[INFO] 10.244.0.24:44413 - 5600 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00004255s
	[INFO] 10.244.0.24:52272 - 18833 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000075724s
	[INFO] 10.244.0.24:43324 - 7973 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.003702053s
	[INFO] 10.244.0.24:41740 - 4478 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003651668s
	
	
	==> describe nodes <==
	Name:               addons-528000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-528000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fbeeb744274463b05401c917e5ab21bbaf5ef95a
	                    minikube.k8s.io/name=addons-528000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_14T09_43_50_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-528000
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 14 Sep 2024 16:43:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-528000
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Sep 2024 16:56:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 14 Sep 2024 16:56:24 +0000   Sat, 14 Sep 2024 16:43:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 14 Sep 2024 16:56:24 +0000   Sat, 14 Sep 2024 16:43:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 14 Sep 2024 16:56:24 +0000   Sat, 14 Sep 2024 16:43:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 14 Sep 2024 16:56:24 +0000   Sat, 14 Sep 2024 16:43:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.2
	  Hostname:    addons-528000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	System Info:
	  Machine ID:                 41e24cd4ffed4e768131f7cc76b95402
	  System UUID:                41e24cd4ffed4e768131f7cc76b95402
	  Boot ID:                    23e0a56e-8941-46a5-949b-7a4b0519c5a8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m12s
	  default                     cloud-spanner-emulator-769b77f747-6f6d2     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  gadget                      gadget-zxdlq                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  gcp-auth                    gcp-auth-89d5ffd79-fsr2x                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-ffs8n    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         11m
	  kube-system                 coredns-7c65d6cfc9-csgbk                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     12m
	  kube-system                 etcd-addons-528000                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-528000                250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-528000       200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-4hs9z                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-528000                100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 metrics-server-84c5f94fbc-x9frd             100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         12m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  local-path-storage          local-path-provisioner-86d989889c-9gdxd     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             460Mi (12%)  170Mi (4%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 12m   kube-proxy       
	  Normal  Starting                 12m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  12m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m   kubelet          Node addons-528000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m   kubelet          Node addons-528000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m   kubelet          Node addons-528000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                12m   kubelet          Node addons-528000 status is now: NodeReady
	  Normal  RegisteredNode           12m   node-controller  Node addons-528000 event: Registered Node addons-528000 in Controller
	
	
	==> dmesg <==
	[  +5.026902] kauditd_printk_skb: 14 callbacks suppressed
	[  +6.081747] kauditd_printk_skb: 8 callbacks suppressed
	[  +5.806264] kauditd_printk_skb: 22 callbacks suppressed
	[Sep14 16:45] kauditd_printk_skb: 7 callbacks suppressed
	[ +30.495079] kauditd_printk_skb: 21 callbacks suppressed
	[Sep14 16:46] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.000393] kauditd_printk_skb: 59 callbacks suppressed
	[  +6.142395] kauditd_printk_skb: 21 callbacks suppressed
	[ +11.445030] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.346482] kauditd_printk_skb: 10 callbacks suppressed
	[  +5.415721] kauditd_printk_skb: 11 callbacks suppressed
	[Sep14 16:47] kauditd_printk_skb: 2 callbacks suppressed
	[ +16.776874] kauditd_printk_skb: 20 callbacks suppressed
	[ +18.948791] kauditd_printk_skb: 2 callbacks suppressed
	[Sep14 16:50] kauditd_printk_skb: 10 callbacks suppressed
	[Sep14 16:55] kauditd_printk_skb: 2 callbacks suppressed
	[  +7.860190] kauditd_printk_skb: 7 callbacks suppressed
	[  +6.602896] kauditd_printk_skb: 10 callbacks suppressed
	[  +6.116628] kauditd_printk_skb: 7 callbacks suppressed
	[  +9.188727] kauditd_printk_skb: 45 callbacks suppressed
	[  +5.389533] kauditd_printk_skb: 6 callbacks suppressed
	[Sep14 16:56] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.501440] kauditd_printk_skb: 23 callbacks suppressed
	[  +5.286172] kauditd_printk_skb: 15 callbacks suppressed
	[ +11.314150] kauditd_printk_skb: 15 callbacks suppressed
	
	
	==> etcd [dd5540ef485b] <==
	{"level":"info","ts":"2024-09-14T16:43:46.252257Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-14T16:43:46.252310Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-14T16:43:46.252347Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 received MsgPreVoteResp from c46d288d2fcb0590 at term 1"}
	{"level":"info","ts":"2024-09-14T16:43:46.252359Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became candidate at term 2"}
	{"level":"info","ts":"2024-09-14T16:43:46.252366Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 received MsgVoteResp from c46d288d2fcb0590 at term 2"}
	{"level":"info","ts":"2024-09-14T16:43:46.252388Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became leader at term 2"}
	{"level":"info","ts":"2024-09-14T16:43:46.252399Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c46d288d2fcb0590 elected leader c46d288d2fcb0590 at term 2"}
	{"level":"info","ts":"2024-09-14T16:43:46.253249Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-14T16:43:46.253446Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-14T16:43:46.253236Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"c46d288d2fcb0590","local-member-attributes":"{Name:addons-528000 ClientURLs:[https://192.168.105.2:2379]}","request-path":"/0/members/c46d288d2fcb0590/attributes","cluster-id":"6e03e7863b4f9c54","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-14T16:43:46.256244Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-14T16:43:46.256307Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6e03e7863b4f9c54","local-member-id":"c46d288d2fcb0590","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-14T16:43:46.256376Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-14T16:43:46.256388Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-14T16:43:46.256775Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-14T16:43:46.257244Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-14T16:43:46.262297Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-14T16:43:46.262307Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-14T16:43:46.262683Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-14T16:43:46.263170Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.2:2379"}
	{"level":"warn","ts":"2024-09-14T16:47:01.857221Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"202.585808ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-14T16:47:01.870006Z","caller":"traceutil/trace.go:171","msg":"trace[1286373698] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1538; }","duration":"223.769089ms","start":"2024-09-14T16:47:01.644381Z","end":"2024-09-14T16:47:01.868150Z","steps":["trace[1286373698] 'range keys from in-memory index tree'  (duration: 202.576223ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-14T16:53:46.522353Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1846}
	{"level":"info","ts":"2024-09-14T16:53:46.615378Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1846,"took":"91.769968ms","hash":2684578537,"current-db-size-bytes":8777728,"current-db-size":"8.8 MB","current-db-size-in-use-bytes":4808704,"current-db-size-in-use":"4.8 MB"}
	{"level":"info","ts":"2024-09-14T16:53:46.615481Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2684578537,"revision":1846,"compact-revision":-1}
	
	
	==> gcp-auth [6e69d5f11cf0] <==
	2024/09/14 16:46:35 GCP Auth Webhook started!
	2024/09/14 16:46:52 Ready to marshal response ...
	2024/09/14 16:46:52 Ready to write response ...
	2024/09/14 16:46:52 Ready to marshal response ...
	2024/09/14 16:46:52 Ready to write response ...
	2024/09/14 16:47:18 Ready to marshal response ...
	2024/09/14 16:47:18 Ready to write response ...
	2024/09/14 16:47:18 Ready to marshal response ...
	2024/09/14 16:47:18 Ready to write response ...
	2024/09/14 16:47:18 Ready to marshal response ...
	2024/09/14 16:47:18 Ready to write response ...
	2024/09/14 16:55:20 Ready to marshal response ...
	2024/09/14 16:55:20 Ready to write response ...
	2024/09/14 16:55:29 Ready to marshal response ...
	2024/09/14 16:55:29 Ready to write response ...
	2024/09/14 16:55:35 Ready to marshal response ...
	2024/09/14 16:55:35 Ready to write response ...
	2024/09/14 16:56:07 Ready to marshal response ...
	2024/09/14 16:56:07 Ready to write response ...
	2024/09/14 16:56:07 Ready to marshal response ...
	2024/09/14 16:56:07 Ready to write response ...
	2024/09/14 16:56:18 Ready to marshal response ...
	2024/09/14 16:56:18 Ready to write response ...
	
	
	==> kernel <==
	 16:56:30 up 13 min,  0 users,  load average: 0.62, 0.62, 0.46
	Linux addons-528000 5.10.207 #1 SMP PREEMPT Sat Sep 14 04:33:12 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [75b08ffc7920] <==
	I0914 16:47:08.845925       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0914 16:47:08.940411       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0914 16:47:08.955537       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0914 16:47:09.034224       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0914 16:47:09.633817       1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
	W0914 16:47:09.772121       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0914 16:47:09.804402       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0914 16:47:09.847750       1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0914 16:47:09.882149       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0914 16:47:10.035064       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0914 16:47:10.073519       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	I0914 16:55:27.816585       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0914 16:55:50.766610       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0914 16:55:50.766648       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0914 16:55:50.802689       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0914 16:55:50.802727       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0914 16:55:50.816499       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0914 16:55:50.816550       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0914 16:55:50.830000       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0914 16:55:50.830018       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0914 16:55:50.835112       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0914 16:55:50.835340       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0914 16:55:51.830611       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0914 16:55:51.836145       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0914 16:55:51.880015       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [6b40ebce2648] <==
	E0914 16:55:59.684433       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0914 16:56:00.357178       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 16:56:00.357308       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0914 16:56:01.138937       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 16:56:01.139051       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0914 16:56:04.171898       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 16:56:04.172008       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0914 16:56:06.338929       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
	W0914 16:56:07.145079       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 16:56:07.145182       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0914 16:56:09.303099       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 16:56:09.303188       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0914 16:56:13.026087       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 16:56:13.026149       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0914 16:56:15.601881       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 16:56:15.602000       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0914 16:56:18.244922       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="local-path-storage/local-path-provisioner-86d989889c" duration="1.75µs"
	W0914 16:56:21.221016       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 16:56:21.221122       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0914 16:56:23.607861       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 16:56:23.608119       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0914 16:56:24.357023       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-528000"
	W0914 16:56:25.139244       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 16:56:25.139364       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0914 16:56:29.864052       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="1.583µs"
	
	
	==> kube-proxy [f043a8e8f752] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0914 16:43:55.581151       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0914 16:43:55.587002       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.2"]
	E0914 16:43:55.587056       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0914 16:43:55.765870       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0914 16:43:55.765890       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0914 16:43:55.765906       1 server_linux.go:169] "Using iptables Proxier"
	I0914 16:43:55.766590       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0914 16:43:55.766704       1 server.go:483] "Version info" version="v1.31.1"
	I0914 16:43:55.766710       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 16:43:55.767594       1 config.go:199] "Starting service config controller"
	I0914 16:43:55.767602       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0914 16:43:55.767612       1 config.go:105] "Starting endpoint slice config controller"
	I0914 16:43:55.767614       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0914 16:43:55.767805       1 config.go:328] "Starting node config controller"
	I0914 16:43:55.767808       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0914 16:43:55.868298       1 shared_informer.go:320] Caches are synced for node config
	I0914 16:43:55.868316       1 shared_informer.go:320] Caches are synced for service config
	I0914 16:43:55.868325       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [f279f990e268] <==
	W0914 16:43:47.171597       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0914 16:43:47.172288       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 16:43:47.171619       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0914 16:43:47.172295       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 16:43:47.171642       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0914 16:43:47.172346       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0914 16:43:47.171654       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0914 16:43:47.172355       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0914 16:43:47.989474       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0914 16:43:47.990271       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0914 16:43:47.990422       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	E0914 16:43:47.990318       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 16:43:48.010263       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0914 16:43:48.010688       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 16:43:48.035101       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0914 16:43:48.035172       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0914 16:43:48.059221       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0914 16:43:48.059313       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 16:43:48.096888       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0914 16:43:48.096993       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 16:43:48.100956       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0914 16:43:48.101083       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0914 16:43:48.166173       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0914 16:43:48.166362       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0914 16:43:48.374565       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 14 16:56:23 addons-528000 kubelet[2057]: I0914 16:56:23.358835    2057 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d703fc16-8bfb-445b-8334-9e9eeafc2cbe" path="/var/lib/kubelet/pods/d703fc16-8bfb-445b-8334-9e9eeafc2cbe/volumes"
	Sep 14 16:56:24 addons-528000 kubelet[2057]: I0914 16:56:24.352557    2057 scope.go:117] "RemoveContainer" containerID="73fdbc093094367fb9c289213b44638ea06e7805e8009b6e5990af4dd75d23c3"
	Sep 14 16:56:24 addons-528000 kubelet[2057]: E0914 16:56:24.352667    2057 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-zxdlq_gadget(02f67fbd-a42a-46f8-bb91-252ea99ccb7b)\"" pod="gadget/gadget-zxdlq" podUID="02f67fbd-a42a-46f8-bb91-252ea99ccb7b"
	Sep 14 16:56:27 addons-528000 kubelet[2057]: E0914 16:56:27.359023    2057 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="74481fff-194d-4eb1-b668-a8cdc4cf12bb"
	Sep 14 16:56:28 addons-528000 kubelet[2057]: E0914 16:56:28.353916    2057 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-test\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox\\\"\"" pod="default/registry-test" podUID="52b184f1-33bf-46a5-9023-c1c9cef810b3"
	Sep 14 16:56:29 addons-528000 kubelet[2057]: I0914 16:56:29.851890    2057 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xp5wm\" (UniqueName: \"kubernetes.io/projected/52b184f1-33bf-46a5-9023-c1c9cef810b3-kube-api-access-xp5wm\") pod \"52b184f1-33bf-46a5-9023-c1c9cef810b3\" (UID: \"52b184f1-33bf-46a5-9023-c1c9cef810b3\") "
	Sep 14 16:56:29 addons-528000 kubelet[2057]: I0914 16:56:29.851917    2057 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/52b184f1-33bf-46a5-9023-c1c9cef810b3-gcp-creds\") pod \"52b184f1-33bf-46a5-9023-c1c9cef810b3\" (UID: \"52b184f1-33bf-46a5-9023-c1c9cef810b3\") "
	Sep 14 16:56:29 addons-528000 kubelet[2057]: I0914 16:56:29.851977    2057 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52b184f1-33bf-46a5-9023-c1c9cef810b3-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "52b184f1-33bf-46a5-9023-c1c9cef810b3" (UID: "52b184f1-33bf-46a5-9023-c1c9cef810b3"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 14 16:56:29 addons-528000 kubelet[2057]: I0914 16:56:29.853922    2057 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52b184f1-33bf-46a5-9023-c1c9cef810b3-kube-api-access-xp5wm" (OuterVolumeSpecName: "kube-api-access-xp5wm") pod "52b184f1-33bf-46a5-9023-c1c9cef810b3" (UID: "52b184f1-33bf-46a5-9023-c1c9cef810b3"). InnerVolumeSpecName "kube-api-access-xp5wm". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 14 16:56:29 addons-528000 kubelet[2057]: I0914 16:56:29.953013    2057 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/52b184f1-33bf-46a5-9023-c1c9cef810b3-gcp-creds\") on node \"addons-528000\" DevicePath \"\""
	Sep 14 16:56:29 addons-528000 kubelet[2057]: I0914 16:56:29.953026    2057 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-xp5wm\" (UniqueName: \"kubernetes.io/projected/52b184f1-33bf-46a5-9023-c1c9cef810b3-kube-api-access-xp5wm\") on node \"addons-528000\" DevicePath \"\""
	Sep 14 16:56:30 addons-528000 kubelet[2057]: I0914 16:56:30.110430    2057 scope.go:117] "RemoveContainer" containerID="1acd3586fdeff6fde8f0e8755ab922d5257a9fd06370b06e624c8f7075bc81e2"
	Sep 14 16:56:30 addons-528000 kubelet[2057]: I0914 16:56:30.125610    2057 scope.go:117] "RemoveContainer" containerID="1acd3586fdeff6fde8f0e8755ab922d5257a9fd06370b06e624c8f7075bc81e2"
	Sep 14 16:56:30 addons-528000 kubelet[2057]: E0914 16:56:30.127446    2057 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 1acd3586fdeff6fde8f0e8755ab922d5257a9fd06370b06e624c8f7075bc81e2" containerID="1acd3586fdeff6fde8f0e8755ab922d5257a9fd06370b06e624c8f7075bc81e2"
	Sep 14 16:56:30 addons-528000 kubelet[2057]: I0914 16:56:30.127468    2057 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"1acd3586fdeff6fde8f0e8755ab922d5257a9fd06370b06e624c8f7075bc81e2"} err="failed to get container status \"1acd3586fdeff6fde8f0e8755ab922d5257a9fd06370b06e624c8f7075bc81e2\": rpc error: code = Unknown desc = Error response from daemon: No such container: 1acd3586fdeff6fde8f0e8755ab922d5257a9fd06370b06e624c8f7075bc81e2"
	Sep 14 16:56:30 addons-528000 kubelet[2057]: I0914 16:56:30.127480    2057 scope.go:117] "RemoveContainer" containerID="d3470bc290cd15c7dfd40b7a864940adebc6275ad8b88efd549d5014218c049f"
	Sep 14 16:56:30 addons-528000 kubelet[2057]: I0914 16:56:30.135916    2057 scope.go:117] "RemoveContainer" containerID="d3470bc290cd15c7dfd40b7a864940adebc6275ad8b88efd549d5014218c049f"
	Sep 14 16:56:30 addons-528000 kubelet[2057]: E0914 16:56:30.136604    2057 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: d3470bc290cd15c7dfd40b7a864940adebc6275ad8b88efd549d5014218c049f" containerID="d3470bc290cd15c7dfd40b7a864940adebc6275ad8b88efd549d5014218c049f"
	Sep 14 16:56:30 addons-528000 kubelet[2057]: I0914 16:56:30.136620    2057 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"d3470bc290cd15c7dfd40b7a864940adebc6275ad8b88efd549d5014218c049f"} err="failed to get container status \"d3470bc290cd15c7dfd40b7a864940adebc6275ad8b88efd549d5014218c049f\": rpc error: code = Unknown desc = Error response from daemon: No such container: d3470bc290cd15c7dfd40b7a864940adebc6275ad8b88efd549d5014218c049f"
	Sep 14 16:56:30 addons-528000 kubelet[2057]: I0914 16:56:30.154240    2057 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dgrtm\" (UniqueName: \"kubernetes.io/projected/4a35ee5a-620a-469e-a679-2174aad28170-kube-api-access-dgrtm\") pod \"4a35ee5a-620a-469e-a679-2174aad28170\" (UID: \"4a35ee5a-620a-469e-a679-2174aad28170\") "
	Sep 14 16:56:30 addons-528000 kubelet[2057]: I0914 16:56:30.154267    2057 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29z5r\" (UniqueName: \"kubernetes.io/projected/321fe437-23fd-4025-9847-6fd69811ce7a-kube-api-access-29z5r\") pod \"321fe437-23fd-4025-9847-6fd69811ce7a\" (UID: \"321fe437-23fd-4025-9847-6fd69811ce7a\") "
	Sep 14 16:56:30 addons-528000 kubelet[2057]: I0914 16:56:30.154898    2057 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a35ee5a-620a-469e-a679-2174aad28170-kube-api-access-dgrtm" (OuterVolumeSpecName: "kube-api-access-dgrtm") pod "4a35ee5a-620a-469e-a679-2174aad28170" (UID: "4a35ee5a-620a-469e-a679-2174aad28170"). InnerVolumeSpecName "kube-api-access-dgrtm". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 14 16:56:30 addons-528000 kubelet[2057]: I0914 16:56:30.154976    2057 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/321fe437-23fd-4025-9847-6fd69811ce7a-kube-api-access-29z5r" (OuterVolumeSpecName: "kube-api-access-29z5r") pod "321fe437-23fd-4025-9847-6fd69811ce7a" (UID: "321fe437-23fd-4025-9847-6fd69811ce7a"). InnerVolumeSpecName "kube-api-access-29z5r". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 14 16:56:30 addons-528000 kubelet[2057]: I0914 16:56:30.254466    2057 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-dgrtm\" (UniqueName: \"kubernetes.io/projected/4a35ee5a-620a-469e-a679-2174aad28170-kube-api-access-dgrtm\") on node \"addons-528000\" DevicePath \"\""
	Sep 14 16:56:30 addons-528000 kubelet[2057]: I0914 16:56:30.254505    2057 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-29z5r\" (UniqueName: \"kubernetes.io/projected/321fe437-23fd-4025-9847-6fd69811ce7a-kube-api-access-29z5r\") on node \"addons-528000\" DevicePath \"\""
	
	
	==> storage-provisioner [e65caafb6bbc] <==
	I0914 16:43:58.556377       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0914 16:43:58.590042       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0914 16:43:58.590064       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0914 16:43:58.657777       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0914 16:43:58.659819       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-528000_88700de2-1cde-49f9-a147-01bbcafd993c!
	I0914 16:43:58.682720       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"64712dba-ed3c-4b03-a738-b8d9cd2def37", APIVersion:"v1", ResourceVersion:"670", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-528000_88700de2-1cde-49f9-a147-01bbcafd993c became leader
	I0914 16:43:58.764409       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-528000_88700de2-1cde-49f9-a147-01bbcafd993c!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-528000 -n addons-528000
helpers_test.go:261: (dbg) Run:  kubectl --context addons-528000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox ingress-nginx-admission-create-9gpcg ingress-nginx-admission-patch-58p4m
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-528000 describe pod busybox ingress-nginx-admission-create-9gpcg ingress-nginx-admission-patch-58p4m
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-528000 describe pod busybox ingress-nginx-admission-create-9gpcg ingress-nginx-admission-patch-58p4m: exit status 1 (42.40825ms)

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-528000/192.168.105.2
	Start Time:       Sat, 14 Sep 2024 09:47:18 -0700
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.26
	IPs:
	  IP:  10.244.0.26
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zdqbv (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-zdqbv:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m12s                   default-scheduler  Successfully assigned default/busybox to addons-528000
	  Normal   Pulling    7m49s (x4 over 9m12s)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m49s (x4 over 9m12s)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m49s (x4 over 9m12s)   kubelet            Error: ErrImagePull
	  Warning  Failed     7m22s (x6 over 9m12s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m11s (x20 over 9m12s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-9gpcg" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-58p4m" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-528000 describe pod busybox ingress-nginx-admission-create-9gpcg ingress-nginx-admission-patch-58p4m: exit status 1
--- FAIL: TestAddons/parallel/Registry (71.33s)
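Note on the failure above: the registry test dies on a single symptom, the busybox pod stuck in ImagePullBackOff because every kubelet pull of gcr.io/k8s-minikube/busybox:1.28.4-glibc was rejected with "unauthorized: authentication failed". A minimal reproduction sketch from the host, assuming outbound HTTPS access (the manifest URL is copied verbatim from the kubelet Failed event above, not independently verified):

	# Probe the same manifest endpoint the kubelet hits; a 401 status line
	# reproduces the "unauthorized" pull error, while a 2xx/3xx would point
	# at node-local credential or proxy problems instead.
	curl -sI https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc | head -n 1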

                                                
                                    
TestCertOptions (10.15s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-811000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-811000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.882747084s)

                                                
                                                
-- stdout --
	* [cert-options-811000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19643
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-811000" primary control-plane node in "cert-options-811000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-811000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-811000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-811000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-811000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-811000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (79.557083ms)

                                                
                                                
-- stdout --
	* The control-plane node cert-options-811000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-811000"

                                                
                                                
-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-811000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-811000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-811000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-811000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (41.052208ms)

                                                
                                                
-- stdout --
	* The control-plane node cert-options-811000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-811000"

                                                
                                                
-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-811000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contains the right api port. 
-- stdout --
	* The control-plane node cert-options-811000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-811000"

                                                
                                                
-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-09-14 10:28:47.900982 -0700 PDT m=+2754.197253042
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-811000 -n cert-options-811000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-811000 -n cert-options-811000: exit status 7 (30.942917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-811000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-811000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-811000
--- FAIL: TestCertOptions (10.15s)
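Note on the failure above: every assertion in TestCertOptions fails downstream of one fault. The qemu2 driver cannot connect to the socket_vmnet socket, so the VM never boots (exit status 80), and each later ssh probe returns exit status 83 against a Stopped host. A pre-flight sketch, assuming the socket path from the error text and that BSD nc's -U flag is available on the build host (a quick probe, not the driver's actual handshake):

	# Socket path copied from the error text above; the daemon must be listening here.
	ls -l /var/run/socket_vmnet
	# Hypothetical probe: a stream connect should succeed when socket_vmnet is up.
	nc -U /var/run/socket_vmnet </dev/null && echo socket reachable

Once the VM actually starts, the SAN assertion itself can be re-checked with the test's own command, filtered to the relevant extension:

	out/minikube-darwin-arm64 -p cert-options-811000 ssh \
	  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
	  | grep -A1 'Subject Alternative Name'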

                                                
                                    
TestCertExpiration (195.32s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-528000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-528000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (9.951868125s)

                                                
                                                
-- stdout --
	* [cert-expiration-528000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19643
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-528000" primary control-plane node in "cert-expiration-528000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-528000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-528000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-528000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-528000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
E0914 10:31:47.845436    1603 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/functional-855000/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-528000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.219951375s)

                                                
                                                
-- stdout --
	* [cert-expiration-528000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19643
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-528000" primary control-plane node in "cert-expiration-528000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-528000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-528000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-528000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-528000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-528000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19643
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-528000" primary control-plane node in "cert-expiration-528000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-528000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-528000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-528000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-09-14 10:31:47.955124 -0700 PDT m=+2934.258962251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-528000 -n cert-expiration-528000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-528000 -n cert-expiration-528000: exit status 7 (65.8155ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-528000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-528000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-528000
--- FAIL: TestCertExpiration (195.32s)
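Note on the failure above: TestCertExpiration hits the same socket_vmnet fault on both starts, so the "expired certs" warning it asserts on never had a chance to appear; the ~195 s runtime is dominated by the test's deliberate wait for the 3m certificates to expire between the two starts. A sketch for confirming expiry by hand, assuming the profile keeps a client.crt alongside its config (path pattern inferred by analogy with the cert_rotation error above, not verified):

	# Print the notAfter date of the profile's client certificate.
	openssl x509 -noout -enddate \
	  -in /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/cert-expiration-528000/client.crt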

                                                
                                    
TestDockerFlags (10.27s)

                                                
                                                
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

                                                
                                                

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-413000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-413000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.029727042s)

                                                
                                                
-- stdout --
	* [docker-flags-413000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19643
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-413000" primary control-plane node in "docker-flags-413000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-413000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 10:28:27.620833    4497 out.go:345] Setting OutFile to fd 1 ...
	I0914 10:28:27.620953    4497 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:28:27.620956    4497 out.go:358] Setting ErrFile to fd 2...
	I0914 10:28:27.620959    4497 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:28:27.621078    4497 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19643-1079/.minikube/bin
	I0914 10:28:27.622109    4497 out.go:352] Setting JSON to false
	I0914 10:28:27.638106    4497 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3470,"bootTime":1726331437,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0914 10:28:27.638207    4497 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0914 10:28:27.645729    4497 out.go:177] * [docker-flags-413000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0914 10:28:27.653506    4497 out.go:177]   - MINIKUBE_LOCATION=19643
	I0914 10:28:27.653539    4497 notify.go:220] Checking for updates...
	I0914 10:28:27.660498    4497 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	I0914 10:28:27.663517    4497 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 10:28:27.666522    4497 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 10:28:27.669471    4497 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	I0914 10:28:27.672534    4497 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 10:28:27.675888    4497 config.go:182] Loaded profile config "force-systemd-flag-203000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 10:28:27.675961    4497 config.go:182] Loaded profile config "multinode-699000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 10:28:27.676001    4497 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 10:28:27.680492    4497 out.go:177] * Using the qemu2 driver based on user configuration
	I0914 10:28:27.687488    4497 start.go:297] selected driver: qemu2
	I0914 10:28:27.687494    4497 start.go:901] validating driver "qemu2" against <nil>
	I0914 10:28:27.687502    4497 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 10:28:27.689639    4497 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0914 10:28:27.692474    4497 out.go:177] * Automatically selected the socket_vmnet network
	I0914 10:28:27.695583    4497 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0914 10:28:27.695607    4497 cni.go:84] Creating CNI manager for ""
	I0914 10:28:27.695638    4497 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 10:28:27.695647    4497 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0914 10:28:27.695681    4497 start.go:340] cluster config:
	{Name:docker-flags-413000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-413000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMn
etClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 10:28:27.699295    4497 iso.go:125] acquiring lock: {Name:mk9ce8cb266895455f3a8fa26b67755853c5c1f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 10:28:27.706465    4497 out.go:177] * Starting "docker-flags-413000" primary control-plane node in "docker-flags-413000" cluster
	I0914 10:28:27.710431    4497 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0914 10:28:27.710447    4497 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0914 10:28:27.710457    4497 cache.go:56] Caching tarball of preloaded images
	I0914 10:28:27.710515    4497 preload.go:172] Found /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 10:28:27.710521    4497 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0914 10:28:27.710579    4497 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/docker-flags-413000/config.json ...
	I0914 10:28:27.710590    4497 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/docker-flags-413000/config.json: {Name:mk9a4de8fbd2c80bb1a6ce3ecc250c87b9a31171 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 10:28:27.710880    4497 start.go:360] acquireMachinesLock for docker-flags-413000: {Name:mk3755b4300f547653a71a0686083053e04fcd63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 10:28:27.710918    4497 start.go:364] duration metric: took 28.458µs to acquireMachinesLock for "docker-flags-413000"
	I0914 10:28:27.710932    4497 start.go:93] Provisioning new machine with config: &{Name:docker-flags-413000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey
: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-413000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:dock
er MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 10:28:27.710968    4497 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 10:28:27.718476    4497 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0914 10:28:27.735768    4497 start.go:159] libmachine.API.Create for "docker-flags-413000" (driver="qemu2")
	I0914 10:28:27.735799    4497 client.go:168] LocalClient.Create starting
	I0914 10:28:27.735855    4497 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/ca.pem
	I0914 10:28:27.735883    4497 main.go:141] libmachine: Decoding PEM data...
	I0914 10:28:27.735893    4497 main.go:141] libmachine: Parsing certificate...
	I0914 10:28:27.735931    4497 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/cert.pem
	I0914 10:28:27.735954    4497 main.go:141] libmachine: Decoding PEM data...
	I0914 10:28:27.735961    4497 main.go:141] libmachine: Parsing certificate...
	I0914 10:28:27.736308    4497 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19643-1079/.minikube/cache/iso/arm64/minikube-v1.34.0-1726281733-19643-arm64.iso...
	I0914 10:28:27.919840    4497 main.go:141] libmachine: Creating SSH key...
	I0914 10:28:28.072765    4497 main.go:141] libmachine: Creating Disk image...
	I0914 10:28:28.072772    4497 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 10:28:28.072953    4497 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/docker-flags-413000/disk.qcow2.raw /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/docker-flags-413000/disk.qcow2
	I0914 10:28:28.082592    4497 main.go:141] libmachine: STDOUT: 
	I0914 10:28:28.082613    4497 main.go:141] libmachine: STDERR: 
	I0914 10:28:28.082669    4497 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/docker-flags-413000/disk.qcow2 +20000M
	I0914 10:28:28.090576    4497 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 10:28:28.090592    4497 main.go:141] libmachine: STDERR: 
	I0914 10:28:28.090604    4497 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/docker-flags-413000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/docker-flags-413000/disk.qcow2
	I0914 10:28:28.090609    4497 main.go:141] libmachine: Starting QEMU VM...
	I0914 10:28:28.090619    4497 qemu.go:418] Using hvf for hardware acceleration
	I0914 10:28:28.090647    4497 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/docker-flags-413000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/docker-flags-413000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/docker-flags-413000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:8a:1d:87:6c:82 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/docker-flags-413000/disk.qcow2
	I0914 10:28:28.092254    4497 main.go:141] libmachine: STDOUT: 
	I0914 10:28:28.092269    4497 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 10:28:28.092289    4497 client.go:171] duration metric: took 356.499084ms to LocalClient.Create
	I0914 10:28:30.094375    4497 start.go:128] duration metric: took 2.383490667s to createHost
	I0914 10:28:30.094418    4497 start.go:83] releasing machines lock for "docker-flags-413000", held for 2.383591959s
	W0914 10:28:30.094468    4497 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 10:28:30.116691    4497 out.go:177] * Deleting "docker-flags-413000" in qemu2 ...
	W0914 10:28:30.141137    4497 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 10:28:30.141154    4497 start.go:729] Will try again in 5 seconds ...
	I0914 10:28:35.143174    4497 start.go:360] acquireMachinesLock for docker-flags-413000: {Name:mk3755b4300f547653a71a0686083053e04fcd63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 10:28:35.154549    4497 start.go:364] duration metric: took 11.236583ms to acquireMachinesLock for "docker-flags-413000"
	I0914 10:28:35.154746    4497 start.go:93] Provisioning new machine with config: &{Name:docker-flags-413000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey
: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-413000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:dock
er MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 10:28:35.155062    4497 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 10:28:35.171688    4497 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0914 10:28:35.221882    4497 start.go:159] libmachine.API.Create for "docker-flags-413000" (driver="qemu2")
	I0914 10:28:35.221935    4497 client.go:168] LocalClient.Create starting
	I0914 10:28:35.222040    4497 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/ca.pem
	I0914 10:28:35.222098    4497 main.go:141] libmachine: Decoding PEM data...
	I0914 10:28:35.222114    4497 main.go:141] libmachine: Parsing certificate...
	I0914 10:28:35.222186    4497 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/cert.pem
	I0914 10:28:35.222235    4497 main.go:141] libmachine: Decoding PEM data...
	I0914 10:28:35.222248    4497 main.go:141] libmachine: Parsing certificate...
	I0914 10:28:35.222957    4497 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19643-1079/.minikube/cache/iso/arm64/minikube-v1.34.0-1726281733-19643-arm64.iso...
	I0914 10:28:35.400846    4497 main.go:141] libmachine: Creating SSH key...
	I0914 10:28:35.543060    4497 main.go:141] libmachine: Creating Disk image...
	I0914 10:28:35.543068    4497 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 10:28:35.543264    4497 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/docker-flags-413000/disk.qcow2.raw /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/docker-flags-413000/disk.qcow2
	I0914 10:28:35.553012    4497 main.go:141] libmachine: STDOUT: 
	I0914 10:28:35.553029    4497 main.go:141] libmachine: STDERR: 
	I0914 10:28:35.553092    4497 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/docker-flags-413000/disk.qcow2 +20000M
	I0914 10:28:35.560983    4497 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 10:28:35.560999    4497 main.go:141] libmachine: STDERR: 
	I0914 10:28:35.561014    4497 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/docker-flags-413000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/docker-flags-413000/disk.qcow2
	I0914 10:28:35.561018    4497 main.go:141] libmachine: Starting QEMU VM...
	I0914 10:28:35.561030    4497 qemu.go:418] Using hvf for hardware acceleration
	I0914 10:28:35.561064    4497 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/docker-flags-413000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/docker-flags-413000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/docker-flags-413000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:3b:dd:de:de:5a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/docker-flags-413000/disk.qcow2
	I0914 10:28:35.562719    4497 main.go:141] libmachine: STDOUT: 
	I0914 10:28:35.562740    4497 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 10:28:35.562756    4497 client.go:171] duration metric: took 340.829834ms to LocalClient.Create
	I0914 10:28:37.564836    4497 start.go:128] duration metric: took 2.409842459s to createHost
	I0914 10:28:37.564891    4497 start.go:83] releasing machines lock for "docker-flags-413000", held for 2.410417625s
	W0914 10:28:37.565305    4497 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-413000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-413000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 10:28:37.580864    4497 out.go:201] 
	W0914 10:28:37.595185    4497 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 10:28:37.595221    4497 out.go:270] * 
	* 
	W0914 10:28:37.597031    4497 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 10:28:37.606888    4497 out.go:201] 

                                                
                                                
** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-413000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-413000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-413000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (80.452417ms)

                                                
                                                
-- stdout --
	* The control-plane node docker-flags-413000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-413000"

                                                
                                                
-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-413000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-413000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-413000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-413000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-413000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-413000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-413000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (45.682125ms)

                                                
                                                
-- stdout --
	* The control-plane node docker-flags-413000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-413000"

                                                
                                                
-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-413000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-413000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control-plane node docker-flags-413000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-413000\"\n"
panic.go:629: *** TestDockerFlags FAILED at 2024-09-14 10:28:37.753738 -0700 PDT m=+2744.049581501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-413000 -n docker-flags-413000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-413000 -n docker-flags-413000: exit status 7 (28.952834ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-413000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-413000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-413000
--- FAIL: TestDockerFlags (10.27s)
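Note on the failure above: TestDockerFlags never reached the Docker daemon, so both systemctl probes ran against a Stopped host. For reference, the assertions at docker_test.go:63 and docker_test.go:73 expect the --docker-env and --docker-opt flags to surface in these two properties; a sketch of passing output on a healthy node (the dockerd path and elided argv are illustrative assumptions, only FOO=BAR, BAZ=BAT, and --debug come from the test):

	$ sudo systemctl show docker --property=Environment --no-pager
	Environment=FOO=BAR BAZ=BAT
	$ sudo systemctl show docker --property=ExecStart --no-pager
	ExecStart={ path=/usr/bin/dockerd ; argv[]=/usr/bin/dockerd ... --debug ... }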

                                                
                                    
TestForceSystemdFlag (10.33s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-203000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-203000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.134048667s)

                                                
                                                
-- stdout --
	* [force-systemd-flag-203000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19643
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-203000" primary control-plane node in "force-systemd-flag-203000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-203000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 10:28:22.485497    4474 out.go:345] Setting OutFile to fd 1 ...
	I0914 10:28:22.485628    4474 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:28:22.485635    4474 out.go:358] Setting ErrFile to fd 2...
	I0914 10:28:22.485637    4474 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:28:22.485781    4474 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19643-1079/.minikube/bin
	I0914 10:28:22.486823    4474 out.go:352] Setting JSON to false
	I0914 10:28:22.503052    4474 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3465,"bootTime":1726331437,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0914 10:28:22.503121    4474 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0914 10:28:22.510784    4474 out.go:177] * [force-systemd-flag-203000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0914 10:28:22.518713    4474 out.go:177]   - MINIKUBE_LOCATION=19643
	I0914 10:28:22.518748    4474 notify.go:220] Checking for updates...
	I0914 10:28:22.528668    4474 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	I0914 10:28:22.531692    4474 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 10:28:22.534729    4474 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 10:28:22.536312    4474 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	I0914 10:28:22.539666    4474 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 10:28:22.543083    4474 config.go:182] Loaded profile config "force-systemd-env-788000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 10:28:22.543162    4474 config.go:182] Loaded profile config "multinode-699000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 10:28:22.543219    4474 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 10:28:22.547590    4474 out.go:177] * Using the qemu2 driver based on user configuration
	I0914 10:28:22.554697    4474 start.go:297] selected driver: qemu2
	I0914 10:28:22.554703    4474 start.go:901] validating driver "qemu2" against <nil>
	I0914 10:28:22.554708    4474 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 10:28:22.557120    4474 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0914 10:28:22.559757    4474 out.go:177] * Automatically selected the socket_vmnet network
	I0914 10:28:22.562792    4474 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0914 10:28:22.562808    4474 cni.go:84] Creating CNI manager for ""
	I0914 10:28:22.562830    4474 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 10:28:22.562836    4474 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0914 10:28:22.562863    4474 start.go:340] cluster config:
	{Name:force-systemd-flag-203000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-203000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 10:28:22.566692    4474 iso.go:125] acquiring lock: {Name:mk9ce8cb266895455f3a8fa26b67755853c5c1f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 10:28:22.573706    4474 out.go:177] * Starting "force-systemd-flag-203000" primary control-plane node in "force-systemd-flag-203000" cluster
	I0914 10:28:22.577705    4474 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0914 10:28:22.577723    4474 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0914 10:28:22.577734    4474 cache.go:56] Caching tarball of preloaded images
	I0914 10:28:22.577812    4474 preload.go:172] Found /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 10:28:22.577818    4474 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0914 10:28:22.577883    4474 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/force-systemd-flag-203000/config.json ...
	I0914 10:28:22.577894    4474 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/force-systemd-flag-203000/config.json: {Name:mke0e374df575c56a51b79d91a473ce998c6ba4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 10:28:22.578120    4474 start.go:360] acquireMachinesLock for force-systemd-flag-203000: {Name:mk3755b4300f547653a71a0686083053e04fcd63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 10:28:22.578155    4474 start.go:364] duration metric: took 28.958µs to acquireMachinesLock for "force-systemd-flag-203000"
	I0914 10:28:22.578168    4474 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-203000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-203000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 10:28:22.578193    4474 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 10:28:22.585714    4474 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0914 10:28:22.604053    4474 start.go:159] libmachine.API.Create for "force-systemd-flag-203000" (driver="qemu2")
	I0914 10:28:22.604080    4474 client.go:168] LocalClient.Create starting
	I0914 10:28:22.604147    4474 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/ca.pem
	I0914 10:28:22.604180    4474 main.go:141] libmachine: Decoding PEM data...
	I0914 10:28:22.604188    4474 main.go:141] libmachine: Parsing certificate...
	I0914 10:28:22.604226    4474 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/cert.pem
	I0914 10:28:22.604250    4474 main.go:141] libmachine: Decoding PEM data...
	I0914 10:28:22.604258    4474 main.go:141] libmachine: Parsing certificate...
	I0914 10:28:22.604727    4474 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19643-1079/.minikube/cache/iso/arm64/minikube-v1.34.0-1726281733-19643-arm64.iso...
	I0914 10:28:22.766252    4474 main.go:141] libmachine: Creating SSH key...
	I0914 10:28:22.820856    4474 main.go:141] libmachine: Creating Disk image...
	I0914 10:28:22.820861    4474 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 10:28:22.821048    4474 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/force-systemd-flag-203000/disk.qcow2.raw /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/force-systemd-flag-203000/disk.qcow2
	I0914 10:28:22.830198    4474 main.go:141] libmachine: STDOUT: 
	I0914 10:28:22.830223    4474 main.go:141] libmachine: STDERR: 
	I0914 10:28:22.830285    4474 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/force-systemd-flag-203000/disk.qcow2 +20000M
	I0914 10:28:22.838153    4474 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 10:28:22.838167    4474 main.go:141] libmachine: STDERR: 
	I0914 10:28:22.838183    4474 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/force-systemd-flag-203000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/force-systemd-flag-203000/disk.qcow2
	I0914 10:28:22.838191    4474 main.go:141] libmachine: Starting QEMU VM...
	I0914 10:28:22.838202    4474 qemu.go:418] Using hvf for hardware acceleration
	I0914 10:28:22.838227    4474 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/force-systemd-flag-203000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/force-systemd-flag-203000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/force-systemd-flag-203000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:9e:1d:02:80:f1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/force-systemd-flag-203000/disk.qcow2
	I0914 10:28:22.839914    4474 main.go:141] libmachine: STDOUT: 
	I0914 10:28:22.839925    4474 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 10:28:22.839948    4474 client.go:171] duration metric: took 235.868916ms to LocalClient.Create
	I0914 10:28:24.842022    4474 start.go:128] duration metric: took 2.263904541s to createHost
	I0914 10:28:24.842072    4474 start.go:83] releasing machines lock for "force-systemd-flag-203000", held for 2.264003167s
	W0914 10:28:24.842123    4474 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 10:28:24.869247    4474 out.go:177] * Deleting "force-systemd-flag-203000" in qemu2 ...
	W0914 10:28:24.895240    4474 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 10:28:24.895262    4474 start.go:729] Will try again in 5 seconds ...
	I0914 10:28:29.897280    4474 start.go:360] acquireMachinesLock for force-systemd-flag-203000: {Name:mk3755b4300f547653a71a0686083053e04fcd63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 10:28:30.094586    4474 start.go:364] duration metric: took 197.194041ms to acquireMachinesLock for "force-systemd-flag-203000"
	I0914 10:28:30.094678    4474 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-203000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-203000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 10:28:30.094935    4474 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 10:28:30.106583    4474 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0914 10:28:30.155180    4474 start.go:159] libmachine.API.Create for "force-systemd-flag-203000" (driver="qemu2")
	I0914 10:28:30.155234    4474 client.go:168] LocalClient.Create starting
	I0914 10:28:30.155380    4474 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/ca.pem
	I0914 10:28:30.155462    4474 main.go:141] libmachine: Decoding PEM data...
	I0914 10:28:30.155481    4474 main.go:141] libmachine: Parsing certificate...
	I0914 10:28:30.155556    4474 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/cert.pem
	I0914 10:28:30.155610    4474 main.go:141] libmachine: Decoding PEM data...
	I0914 10:28:30.155628    4474 main.go:141] libmachine: Parsing certificate...
	I0914 10:28:30.156148    4474 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19643-1079/.minikube/cache/iso/arm64/minikube-v1.34.0-1726281733-19643-arm64.iso...
	I0914 10:28:30.340613    4474 main.go:141] libmachine: Creating SSH key...
	I0914 10:28:30.516565    4474 main.go:141] libmachine: Creating Disk image...
	I0914 10:28:30.516572    4474 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 10:28:30.516784    4474 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/force-systemd-flag-203000/disk.qcow2.raw /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/force-systemd-flag-203000/disk.qcow2
	I0914 10:28:30.526251    4474 main.go:141] libmachine: STDOUT: 
	I0914 10:28:30.526274    4474 main.go:141] libmachine: STDERR: 
	I0914 10:28:30.526331    4474 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/force-systemd-flag-203000/disk.qcow2 +20000M
	I0914 10:28:30.534504    4474 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 10:28:30.534518    4474 main.go:141] libmachine: STDERR: 
	I0914 10:28:30.534537    4474 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/force-systemd-flag-203000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/force-systemd-flag-203000/disk.qcow2
	I0914 10:28:30.534543    4474 main.go:141] libmachine: Starting QEMU VM...
	I0914 10:28:30.534550    4474 qemu.go:418] Using hvf for hardware acceleration
	I0914 10:28:30.534578    4474 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/force-systemd-flag-203000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/force-systemd-flag-203000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/force-systemd-flag-203000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:d6:85:64:ff:6b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/force-systemd-flag-203000/disk.qcow2
	I0914 10:28:30.536234    4474 main.go:141] libmachine: STDOUT: 
	I0914 10:28:30.536247    4474 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 10:28:30.536262    4474 client.go:171] duration metric: took 381.037542ms to LocalClient.Create
	I0914 10:28:32.537232    4474 start.go:128] duration metric: took 2.44236375s to createHost
	I0914 10:28:32.537300    4474 start.go:83] releasing machines lock for "force-systemd-flag-203000", held for 2.442779667s
	W0914 10:28:32.537623    4474 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-203000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-203000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 10:28:32.556782    4474 out.go:201] 
	W0914 10:28:32.564682    4474 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 10:28:32.564701    4474 out.go:270] * 
	* 
	W0914 10:28:32.566767    4474 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 10:28:32.576620    4474 out.go:201] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-203000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
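Both VM-creation attempts above die at the same step: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so the qemu-system-aarch64 process is never launched. A minimal standalone probe (a sketch, not part of the test suite; the socket path is taken from the cluster config above) shows how to check whether the socket_vmnet daemon is listening:

// probe_socket_vmnet.go: check whether anything is listening on the
// socket_vmnet unix socket before blaming QEMU itself.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // SocketVMnetPath from the log above
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// This is the failure mode in the log: nothing is serving the socket.
		fmt.Printf("socket_vmnet not reachable: %v\n", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

On this agent the dial would fail exactly as the test did, pointing at a socket_vmnet daemon that is not running on the host; the same root cause repeats in TestForceSystemdEnv below.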
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-203000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-203000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (80.505042ms)

-- stdout --
	* The control-plane node force-systemd-flag-203000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-203000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-203000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
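The cgroup-driver check that just failed is a thin wrapper around one command run over minikube ssh. A hypothetical standalone version (binary path and profile name from the log; on a healthy cluster started with --force-systemd the command should print "systemd", which is the property this test exists to verify):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "-p", "force-systemd-flag-203000",
		"ssh", "docker info --format {{.CgroupDriver}}")
	out, err := cmd.CombinedOutput()
	// Here it can never get that far: the host is Stopped, so minikube
	// exits 83 with the "host is not running" advice instead.
	fmt.Printf("err=%v\n%s", err, out)
}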
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-09-14 10:28:32.674604 -0700 PDT m=+2738.970232751
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-203000 -n force-systemd-flag-203000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-203000 -n force-systemd-flag-203000: exit status 7 (34.264125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-203000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-203000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-203000
--- FAIL: TestForceSystemdFlag (10.33s)

TestForceSystemdEnv (10.47s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-788000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-788000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.279188667s)

-- stdout --
	* [force-systemd-env-788000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19643
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-788000" primary control-plane node in "force-systemd-env-788000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-788000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0914 10:28:17.154394    4440 out.go:345] Setting OutFile to fd 1 ...
	I0914 10:28:17.154582    4440 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:28:17.154585    4440 out.go:358] Setting ErrFile to fd 2...
	I0914 10:28:17.154588    4440 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:28:17.154713    4440 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19643-1079/.minikube/bin
	I0914 10:28:17.155743    4440 out.go:352] Setting JSON to false
	I0914 10:28:17.172622    4440 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3460,"bootTime":1726331437,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0914 10:28:17.172717    4440 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0914 10:28:17.180067    4440 out.go:177] * [force-systemd-env-788000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0914 10:28:17.189896    4440 notify.go:220] Checking for updates...
	I0914 10:28:17.194937    4440 out.go:177]   - MINIKUBE_LOCATION=19643
	I0914 10:28:17.202848    4440 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	I0914 10:28:17.210833    4440 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 10:28:17.218691    4440 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 10:28:17.226843    4440 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	I0914 10:28:17.233798    4440 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0914 10:28:17.238258    4440 config.go:182] Loaded profile config "multinode-699000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 10:28:17.238301    4440 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 10:28:17.242814    4440 out.go:177] * Using the qemu2 driver based on user configuration
	I0914 10:28:17.250833    4440 start.go:297] selected driver: qemu2
	I0914 10:28:17.250839    4440 start.go:901] validating driver "qemu2" against <nil>
	I0914 10:28:17.250847    4440 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 10:28:17.253374    4440 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0914 10:28:17.257827    4440 out.go:177] * Automatically selected the socket_vmnet network
	I0914 10:28:17.261951    4440 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0914 10:28:17.261968    4440 cni.go:84] Creating CNI manager for ""
	I0914 10:28:17.261999    4440 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 10:28:17.262004    4440 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0914 10:28:17.262046    4440 start.go:340] cluster config:
	{Name:force-systemd-env-788000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-788000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 10:28:17.265980    4440 iso.go:125] acquiring lock: {Name:mk9ce8cb266895455f3a8fa26b67755853c5c1f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 10:28:17.273855    4440 out.go:177] * Starting "force-systemd-env-788000" primary control-plane node in "force-systemd-env-788000" cluster
	I0914 10:28:17.277792    4440 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0914 10:28:17.277808    4440 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0914 10:28:17.277818    4440 cache.go:56] Caching tarball of preloaded images
	I0914 10:28:17.277877    4440 preload.go:172] Found /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 10:28:17.277882    4440 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0914 10:28:17.277934    4440 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/force-systemd-env-788000/config.json ...
	I0914 10:28:17.277945    4440 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/force-systemd-env-788000/config.json: {Name:mkd5264446ff3d2f7e800a35b33278c563d9311a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 10:28:17.278144    4440 start.go:360] acquireMachinesLock for force-systemd-env-788000: {Name:mk3755b4300f547653a71a0686083053e04fcd63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 10:28:17.278177    4440 start.go:364] duration metric: took 27.042µs to acquireMachinesLock for "force-systemd-env-788000"
	I0914 10:28:17.278187    4440 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-788000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-788000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 10:28:17.278219    4440 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 10:28:17.285800    4440 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0914 10:28:17.302283    4440 start.go:159] libmachine.API.Create for "force-systemd-env-788000" (driver="qemu2")
	I0914 10:28:17.302325    4440 client.go:168] LocalClient.Create starting
	I0914 10:28:17.302396    4440 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/ca.pem
	I0914 10:28:17.302426    4440 main.go:141] libmachine: Decoding PEM data...
	I0914 10:28:17.302437    4440 main.go:141] libmachine: Parsing certificate...
	I0914 10:28:17.302476    4440 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/cert.pem
	I0914 10:28:17.302501    4440 main.go:141] libmachine: Decoding PEM data...
	I0914 10:28:17.302512    4440 main.go:141] libmachine: Parsing certificate...
	I0914 10:28:17.302887    4440 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19643-1079/.minikube/cache/iso/arm64/minikube-v1.34.0-1726281733-19643-arm64.iso...
	I0914 10:28:17.508996    4440 main.go:141] libmachine: Creating SSH key...
	I0914 10:28:17.637919    4440 main.go:141] libmachine: Creating Disk image...
	I0914 10:28:17.637934    4440 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 10:28:17.638142    4440 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/force-systemd-env-788000/disk.qcow2.raw /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/force-systemd-env-788000/disk.qcow2
	I0914 10:28:17.647809    4440 main.go:141] libmachine: STDOUT: 
	I0914 10:28:17.647834    4440 main.go:141] libmachine: STDERR: 
	I0914 10:28:17.647908    4440 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/force-systemd-env-788000/disk.qcow2 +20000M
	I0914 10:28:17.656758    4440 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 10:28:17.656782    4440 main.go:141] libmachine: STDERR: 
	I0914 10:28:17.656795    4440 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/force-systemd-env-788000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/force-systemd-env-788000/disk.qcow2
	I0914 10:28:17.656808    4440 main.go:141] libmachine: Starting QEMU VM...
	I0914 10:28:17.656824    4440 qemu.go:418] Using hvf for hardware acceleration
	I0914 10:28:17.656858    4440 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/force-systemd-env-788000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/force-systemd-env-788000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/force-systemd-env-788000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:2d:2a:ff:ca:df -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/force-systemd-env-788000/disk.qcow2
	I0914 10:28:17.658796    4440 main.go:141] libmachine: STDOUT: 
	I0914 10:28:17.658812    4440 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 10:28:17.658833    4440 client.go:171] duration metric: took 356.516208ms to LocalClient.Create
	I0914 10:28:19.660961    4440 start.go:128] duration metric: took 2.382811292s to createHost
	I0914 10:28:19.661052    4440 start.go:83] releasing machines lock for "force-systemd-env-788000", held for 2.382966333s
	W0914 10:28:19.661098    4440 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 10:28:19.667483    4440 out.go:177] * Deleting "force-systemd-env-788000" in qemu2 ...
	W0914 10:28:19.706131    4440 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 10:28:19.706160    4440 start.go:729] Will try again in 5 seconds ...
	I0914 10:28:24.708221    4440 start.go:360] acquireMachinesLock for force-systemd-env-788000: {Name:mk3755b4300f547653a71a0686083053e04fcd63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 10:28:24.842189    4440 start.go:364] duration metric: took 133.866292ms to acquireMachinesLock for "force-systemd-env-788000"
	I0914 10:28:24.842324    4440 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-788000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-788000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 10:28:24.842691    4440 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 10:28:24.859280    4440 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0914 10:28:24.909429    4440 start.go:159] libmachine.API.Create for "force-systemd-env-788000" (driver="qemu2")
	I0914 10:28:24.909477    4440 client.go:168] LocalClient.Create starting
	I0914 10:28:24.909602    4440 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/ca.pem
	I0914 10:28:24.909666    4440 main.go:141] libmachine: Decoding PEM data...
	I0914 10:28:24.909684    4440 main.go:141] libmachine: Parsing certificate...
	I0914 10:28:24.909740    4440 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/cert.pem
	I0914 10:28:24.909783    4440 main.go:141] libmachine: Decoding PEM data...
	I0914 10:28:24.909795    4440 main.go:141] libmachine: Parsing certificate...
	I0914 10:28:24.910478    4440 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19643-1079/.minikube/cache/iso/arm64/minikube-v1.34.0-1726281733-19643-arm64.iso...
	I0914 10:28:25.104246    4440 main.go:141] libmachine: Creating SSH key...
	I0914 10:28:25.321536    4440 main.go:141] libmachine: Creating Disk image...
	I0914 10:28:25.321545    4440 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 10:28:25.321787    4440 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/force-systemd-env-788000/disk.qcow2.raw /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/force-systemd-env-788000/disk.qcow2
	I0914 10:28:25.332057    4440 main.go:141] libmachine: STDOUT: 
	I0914 10:28:25.332168    4440 main.go:141] libmachine: STDERR: 
	I0914 10:28:25.332254    4440 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/force-systemd-env-788000/disk.qcow2 +20000M
	I0914 10:28:25.341020    4440 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 10:28:25.341115    4440 main.go:141] libmachine: STDERR: 
	I0914 10:28:25.341131    4440 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/force-systemd-env-788000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/force-systemd-env-788000/disk.qcow2
	I0914 10:28:25.341135    4440 main.go:141] libmachine: Starting QEMU VM...
	I0914 10:28:25.341142    4440 qemu.go:418] Using hvf for hardware acceleration
	I0914 10:28:25.341172    4440 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/force-systemd-env-788000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/force-systemd-env-788000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/force-systemd-env-788000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:bf:cf:1a:a8:b0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/force-systemd-env-788000/disk.qcow2
	I0914 10:28:25.342899    4440 main.go:141] libmachine: STDOUT: 
	I0914 10:28:25.342911    4440 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 10:28:25.342926    4440 client.go:171] duration metric: took 433.462333ms to LocalClient.Create
	I0914 10:28:27.345140    4440 start.go:128] duration metric: took 2.502478084s to createHost
	I0914 10:28:27.345261    4440 start.go:83] releasing machines lock for "force-systemd-env-788000", held for 2.50312525s
	W0914 10:28:27.345645    4440 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-788000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-788000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 10:28:27.364816    4440 out.go:201] 
	W0914 10:28:27.374598    4440 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 10:28:27.374624    4440 out.go:270] * 
	* 
	W0914 10:28:27.378328    4440 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 10:28:27.388508    4440 out.go:201] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-788000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-788000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-788000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (76.708292ms)

-- stdout --
	* The control-plane node force-systemd-env-788000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-788000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-788000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-09-14 10:28:27.482526 -0700 PDT m=+2733.777936042
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-788000 -n force-systemd-env-788000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-788000 -n force-systemd-env-788000: exit status 7 (33.719792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-788000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-788000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-788000
--- FAIL: TestForceSystemdEnv (10.47s)
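TestForceSystemdEnv fails identically to TestForceSystemdFlag: both createHost attempts (10:28:17 and 10:28:24) die at the socket_vmnet dial. The "Will try again in 5 seconds" line corresponds to a single delete-and-retry around host creation; a paraphrased sketch of that control flow, reconstructed from the log output rather than from the actual minikube start.go source:

package main

import (
	"errors"
	"fmt"
	"time"
)

// startHost retries host creation once after a fixed 5-second pause,
// mirroring the two attempts visible in the log.
func startHost(create func() error) error {
	if err := create(); err != nil {
		fmt.Printf("! StartHost failed, but will try again: %v\n", err)
		time.Sleep(5 * time.Second)
		return create() // second and final attempt
	}
	return nil
}

func main() {
	err := startHost(func() error {
		// Stand-in for createHost; in the log both attempts fail the same way.
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	})
	fmt.Println("final error:", err)
}

Since the failure is environmental (no daemon on the socket), the retry cannot succeed, and the run ends with the same GUEST_PROVISION exit.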

TestFunctional/parallel/ServiceCmdConnect (29.05s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-855000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-855000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-xsmqz" [3532b3c7-e16c-4829-b380-324d0abedf3d] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-xsmqz" [3532b3c7-e16c-4829-b380-324d0abedf3d] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.004932208s
functional_test.go:1649: (dbg) Run:  out/minikube-darwin-arm64 -p functional-855000 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.105.4:32104
functional_test.go:1661: error fetching http://192.168.105.4:32104: Get "http://192.168.105.4:32104": dial tcp 192.168.105.4:32104: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:32104: Get "http://192.168.105.4:32104": dial tcp 192.168.105.4:32104: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:32104: Get "http://192.168.105.4:32104": dial tcp 192.168.105.4:32104: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:32104: Get "http://192.168.105.4:32104": dial tcp 192.168.105.4:32104: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:32104: Get "http://192.168.105.4:32104": dial tcp 192.168.105.4:32104: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:32104: Get "http://192.168.105.4:32104": dial tcp 192.168.105.4:32104: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:32104: Get "http://192.168.105.4:32104": dial tcp 192.168.105.4:32104: connect: connection refused
functional_test.go:1681: failed to fetch http://192.168.105.4:32104: Get "http://192.168.105.4:32104": dial tcp 192.168.105.4:32104: connect: connection refused
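The seven identical fetch errors above are connection refusals, not timeouts: the node at 192.168.105.4 is reachable, but nothing answers behind NodePort 32104. A minimal probe mirroring the test's fetch loop (URL taken from the log above; this is a sketch, not the test code itself):

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	const url = "http://192.168.105.4:32104"
	client := &http.Client{Timeout: 3 * time.Second}
	for attempt := 1; attempt <= 5; attempt++ {
		resp, err := client.Get(url)
		if err != nil {
			// "connection refused" means the node rejected the TCP connection:
			// kube-proxy has no ready endpoints to forward this NodePort to.
			fmt.Printf("attempt %d: %v\n", attempt, err)
			time.Sleep(2 * time.Second)
			continue
		}
		resp.Body.Close()
		fmt.Println("reachable:", resp.Status)
		return
	}
}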
functional_test.go:1598: service test failed - dumping debug information
functional_test.go:1599: -----------------------service failure post-mortem--------------------------------
functional_test.go:1602: (dbg) Run:  kubectl --context functional-855000 describe po hello-node-connect
functional_test.go:1606: hello-node pod describe:
Name:             hello-node-connect-65d86f57f4-xsmqz
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-855000/192.168.105.4
Start Time:       Sat, 14 Sep 2024 10:02:03 -0700
Labels:           app=hello-node-connect
                  pod-template-hash=65d86f57f4
Annotations:      <none>
Status:           Running
IP:               10.244.0.8
IPs:
  IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-connect-65d86f57f4
Containers:
  echoserver-arm:
    Container ID:   docker://cdbdee5af3fb3e8a38e4a0eb6b80018ab28df112d7363b770e7ce4e5a68286a0
    Image:          registry.k8s.io/echoserver-arm:1.8
    Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Sat, 14 Sep 2024 10:02:17 -0700
      Finished:     Sat, 14 Sep 2024 10:02:17 -0700
    Ready:          False
    Restart Count:  2
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cnm85 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-cnm85:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  27s                default-scheduler  Successfully assigned default/hello-node-connect-65d86f57f4-xsmqz to functional-855000
  Normal   Pulled     14s (x3 over 27s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
  Normal   Created    14s (x3 over 27s)  kubelet            Created container echoserver-arm
  Normal   Started    14s (x3 over 27s)  kubelet            Started container echoserver-arm
  Warning  BackOff    2s (x3 over 26s)   kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-65d86f57f4-xsmqz_default(3532b3c7-e16c-4829-b380-324d0abedf3d)

functional_test.go:1608: (dbg) Run:  kubectl --context functional-855000 logs -l app=hello-node-connect
functional_test.go:1612: hello-node logs:
exec /usr/sbin/nginx: exec format error
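"exec format error" is the kernel refusing to execute the container's entrypoint, which strongly suggests the nginx binary inside registry.k8s.io/echoserver-arm:1.8 was built for a different CPU architecture than this arm64 node; that mismatch is why the pod crash-loops in the events above. One way to confirm (a hypothetical helper, assuming the binary has first been copied out of the image, e.g. with docker cp $(docker create registry.k8s.io/echoserver-arm:1.8):/usr/sbin/nginx ./nginx):

package main

import (
	"debug/elf"
	"fmt"
	"log"
)

func main() {
	f, err := elf.Open("./nginx") // illustrative path, see docker cp step above
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	// EM_AARCH64 would run on this arm64 node; EM_X86_64 yields exactly
	// the "exec format error" seen in the pod logs.
	fmt.Println("ELF machine:", f.Machine)
}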
functional_test.go:1614: (dbg) Run:  kubectl --context functional-855000 describe svc hello-node-connect
functional_test.go:1618: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.98.8.157
IPs:                      10.98.8.157
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  32104/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
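The blank Endpoints field above ties the pod failure to the refused connections: a Service only forwards to pods that pass readiness, and the lone hello-node-connect pod never becomes Ready, so NodePort 32104 has nowhere to send traffic. A quick hypothetical cross-check (context name from the log):

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-855000",
		"get", "endpoints", "hello-node-connect", "-o", "wide").CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl failed: %v\n%s", err, out)
	}
	fmt.Print(string(out)) // the ENDPOINTS column stays empty while the pod crash-loops
}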
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-855000 -n functional-855000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-855000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                                                         Args                                                         |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| addons  | functional-855000 addons list                                                                                        | functional-855000 | jenkins | v1.34.0 | 14 Sep 24 10:02 PDT | 14 Sep 24 10:02 PDT |
	| addons  | functional-855000 addons list                                                                                        | functional-855000 | jenkins | v1.34.0 | 14 Sep 24 10:02 PDT | 14 Sep 24 10:02 PDT |
	|         | -o json                                                                                                              |                   |         |         |                     |                     |
	| service | functional-855000 service                                                                                            | functional-855000 | jenkins | v1.34.0 | 14 Sep 24 10:02 PDT | 14 Sep 24 10:02 PDT |
	|         | hello-node-connect --url                                                                                             |                   |         |         |                     |                     |
	| mount   | -p functional-855000                                                                                                 | functional-855000 | jenkins | v1.34.0 | 14 Sep 24 10:02 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2937196915/001:/mount-9p      |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-855000 ssh findmnt                                                                                        | functional-855000 | jenkins | v1.34.0 | 14 Sep 24 10:02 PDT |                     |
	|         | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-855000 ssh findmnt                                                                                        | functional-855000 | jenkins | v1.34.0 | 14 Sep 24 10:02 PDT | 14 Sep 24 10:02 PDT |
	|         | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-855000 ssh -- ls                                                                                          | functional-855000 | jenkins | v1.34.0 | 14 Sep 24 10:02 PDT | 14 Sep 24 10:02 PDT |
	|         | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh     | functional-855000 ssh cat                                                                                            | functional-855000 | jenkins | v1.34.0 | 14 Sep 24 10:02 PDT | 14 Sep 24 10:02 PDT |
	|         | /mount-9p/test-1726333344768753000                                                                                   |                   |         |         |                     |                     |
	| ssh     | functional-855000 ssh stat                                                                                           | functional-855000 | jenkins | v1.34.0 | 14 Sep 24 10:02 PDT | 14 Sep 24 10:02 PDT |
	|         | /mount-9p/created-by-test                                                                                            |                   |         |         |                     |                     |
	| ssh     | functional-855000 ssh stat                                                                                           | functional-855000 | jenkins | v1.34.0 | 14 Sep 24 10:02 PDT | 14 Sep 24 10:02 PDT |
	|         | /mount-9p/created-by-pod                                                                                             |                   |         |         |                     |                     |
	| ssh     | functional-855000 ssh sudo                                                                                           | functional-855000 | jenkins | v1.34.0 | 14 Sep 24 10:02 PDT | 14 Sep 24 10:02 PDT |
	|         | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| ssh     | functional-855000 ssh findmnt                                                                                        | functional-855000 | jenkins | v1.34.0 | 14 Sep 24 10:02 PDT |                     |
	|         | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| mount   | -p functional-855000                                                                                                 | functional-855000 | jenkins | v1.34.0 | 14 Sep 24 10:02 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port3386248699/001:/mount-9p |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1 --port 46464                                                                                  |                   |         |         |                     |                     |
	| ssh     | functional-855000 ssh findmnt                                                                                        | functional-855000 | jenkins | v1.34.0 | 14 Sep 24 10:02 PDT | 14 Sep 24 10:02 PDT |
	|         | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-855000 ssh -- ls                                                                                          | functional-855000 | jenkins | v1.34.0 | 14 Sep 24 10:02 PDT | 14 Sep 24 10:02 PDT |
	|         | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh     | functional-855000 ssh sudo                                                                                           | functional-855000 | jenkins | v1.34.0 | 14 Sep 24 10:02 PDT |                     |
	|         | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| mount   | -p functional-855000                                                                                                 | functional-855000 | jenkins | v1.34.0 | 14 Sep 24 10:02 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1516516861/001:/mount1   |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| mount   | -p functional-855000                                                                                                 | functional-855000 | jenkins | v1.34.0 | 14 Sep 24 10:02 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1516516861/001:/mount2   |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| mount   | -p functional-855000                                                                                                 | functional-855000 | jenkins | v1.34.0 | 14 Sep 24 10:02 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1516516861/001:/mount3   |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-855000 ssh findmnt                                                                                        | functional-855000 | jenkins | v1.34.0 | 14 Sep 24 10:02 PDT |                     |
	|         | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh     | functional-855000 ssh findmnt                                                                                        | functional-855000 | jenkins | v1.34.0 | 14 Sep 24 10:02 PDT | 14 Sep 24 10:02 PDT |
	|         | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh     | functional-855000 ssh findmnt                                                                                        | functional-855000 | jenkins | v1.34.0 | 14 Sep 24 10:02 PDT | 14 Sep 24 10:02 PDT |
	|         | -T /mount2                                                                                                           |                   |         |         |                     |                     |
	| ssh     | functional-855000 ssh findmnt                                                                                        | functional-855000 | jenkins | v1.34.0 | 14 Sep 24 10:02 PDT | 14 Sep 24 10:02 PDT |
	|         | -T /mount3                                                                                                           |                   |         |         |                     |                     |
	| mount   | -p functional-855000                                                                                                 | functional-855000 | jenkins | v1.34.0 | 14 Sep 24 10:02 PDT |                     |
	|         | --kill=true                                                                                                          |                   |         |         |                     |                     |
	| start   | -p functional-855000                                                                                                 | functional-855000 | jenkins | v1.34.0 | 14 Sep 24 10:02 PDT |                     |
	|         | --dry-run --memory                                                                                                   |                   |         |         |                     |                     |
	|         | 250MB --alsologtostderr                                                                                              |                   |         |         |                     |                     |
	|         | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/14 10:02:31
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	
	
	==> Docker <==
	Sep 14 17:02:17 functional-855000 dockerd[5812]: time="2024-09-14T17:02:17.453842267Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 14 17:02:17 functional-855000 cri-dockerd[6059]: time="2024-09-14T17:02:17Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d928e8641712afc0efc05286d29d678aa0297b27bde804eeacae22cf97b6d895/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 14 17:02:18 functional-855000 cri-dockerd[6059]: time="2024-09-14T17:02:18Z" level=info msg="Stop pulling image docker.io/nginx:latest: Status: Image is up to date for nginx:latest"
	Sep 14 17:02:18 functional-855000 dockerd[5812]: time="2024-09-14T17:02:18.273029685Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 14 17:02:18 functional-855000 dockerd[5812]: time="2024-09-14T17:02:18.273071140Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 14 17:02:18 functional-855000 dockerd[5812]: time="2024-09-14T17:02:18.273079890Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 14 17:02:18 functional-855000 dockerd[5812]: time="2024-09-14T17:02:18.273196255Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 14 17:02:25 functional-855000 dockerd[5812]: time="2024-09-14T17:02:25.971931769Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 14 17:02:25 functional-855000 dockerd[5812]: time="2024-09-14T17:02:25.971976349Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 14 17:02:25 functional-855000 dockerd[5812]: time="2024-09-14T17:02:25.971981724Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 14 17:02:25 functional-855000 dockerd[5812]: time="2024-09-14T17:02:25.972195457Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 14 17:02:26 functional-855000 cri-dockerd[6059]: time="2024-09-14T17:02:26Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/46c4bf24f3b853f4db96c25e99135fd704165624431c42f6b88c19fad668637d/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 14 17:02:27 functional-855000 cri-dockerd[6059]: time="2024-09-14T17:02:27Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	Sep 14 17:02:27 functional-855000 dockerd[5812]: time="2024-09-14T17:02:27.472077852Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 14 17:02:27 functional-855000 dockerd[5812]: time="2024-09-14T17:02:27.472112307Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 14 17:02:27 functional-855000 dockerd[5812]: time="2024-09-14T17:02:27.472117515Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 14 17:02:27 functional-855000 dockerd[5812]: time="2024-09-14T17:02:27.472157387Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 14 17:02:27 functional-855000 dockerd[5806]: time="2024-09-14T17:02:27.504476133Z" level=info msg="ignoring event" container=59e07d734576d1232fbf4f326b32f743378799ca62d2683339bccc5a3ede020f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 14 17:02:27 functional-855000 dockerd[5812]: time="2024-09-14T17:02:27.504660244Z" level=info msg="shim disconnected" id=59e07d734576d1232fbf4f326b32f743378799ca62d2683339bccc5a3ede020f namespace=moby
	Sep 14 17:02:27 functional-855000 dockerd[5812]: time="2024-09-14T17:02:27.504707448Z" level=warning msg="cleaning up after shim disconnected" id=59e07d734576d1232fbf4f326b32f743378799ca62d2683339bccc5a3ede020f namespace=moby
	Sep 14 17:02:27 functional-855000 dockerd[5812]: time="2024-09-14T17:02:27.504712865Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 14 17:02:29 functional-855000 dockerd[5812]: time="2024-09-14T17:02:29.334352891Z" level=info msg="shim disconnected" id=46c4bf24f3b853f4db96c25e99135fd704165624431c42f6b88c19fad668637d namespace=moby
	Sep 14 17:02:29 functional-855000 dockerd[5806]: time="2024-09-14T17:02:29.334404971Z" level=info msg="ignoring event" container=46c4bf24f3b853f4db96c25e99135fd704165624431c42f6b88c19fad668637d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 14 17:02:29 functional-855000 dockerd[5812]: time="2024-09-14T17:02:29.334823021Z" level=warning msg="cleaning up after shim disconnected" id=46c4bf24f3b853f4db96c25e99135fd704165624431c42f6b88c19fad668637d namespace=moby
	Sep 14 17:02:29 functional-855000 dockerd[5812]: time="2024-09-14T17:02:29.334835520Z" level=info msg="cleaning up dead shim" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	59e07d734576d       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   5 seconds ago        Exited              mount-munger              0                   46c4bf24f3b85       busybox-mount
	c1b9d1eb43d15       nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3                         14 seconds ago       Running             myfrontend                0                   d928e8641712a       sp-pod
	cdbdee5af3fb3       72565bf5bbedf                                                                                         15 seconds ago       Exited              echoserver-arm            2                   e0cd5fc215048       hello-node-connect-65d86f57f4-xsmqz
	eb35fab72bc8a       72565bf5bbedf                                                                                         22 seconds ago       Exited              echoserver-arm            2                   133c5e15b193d       hello-node-64b4f8f9ff-ppc4j
	6153210528e91       nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf                         36 seconds ago       Running             nginx                     0                   9980bdbd1bbac       nginx-svc
	e0ca2f8a9853b       2f6c962e7b831                                                                                         About a minute ago   Running             coredns                   2                   cfa511c9d9f59       coredns-7c65d6cfc9-jm9jv
	82c6ce49f9854       24a140c548c07                                                                                         About a minute ago   Running             kube-proxy                2                   140ad055e059c       kube-proxy-xk6mq
	0a69a48c0646f       ba04bb24b9575                                                                                         About a minute ago   Running             storage-provisioner       3                   00b54429d8001       storage-provisioner
	34362d4141408       279f381cb3736                                                                                         About a minute ago   Running             kube-controller-manager   2                   55c3f125cf70c       kube-controller-manager-functional-855000
	fef63d3cb8ae0       27e3830e14027                                                                                         About a minute ago   Running             etcd                      2                   eec8987a11f79       etcd-functional-855000
	b1c97afe4afde       7f8aa378bb47d                                                                                         About a minute ago   Running             kube-scheduler            2                   d4b08c68a99d2       kube-scheduler-functional-855000
	dd13db8ee2681       d3f53a98c0a9d                                                                                         About a minute ago   Running             kube-apiserver            0                   d2ea6fc01edf7       kube-apiserver-functional-855000
	883cf406c6914       ba04bb24b9575                                                                                         2 minutes ago        Exited              storage-provisioner       2                   c5514f94e5629       storage-provisioner
	adfc8c1380ca6       2f6c962e7b831                                                                                         2 minutes ago        Exited              coredns                   1                   6df91ae335341       coredns-7c65d6cfc9-jm9jv
	82b3e6d0388c5       24a140c548c07                                                                                         2 minutes ago        Exited              kube-proxy                1                   f85bf3af7f6f2       kube-proxy-xk6mq
	17fba9077db55       279f381cb3736                                                                                         2 minutes ago        Exited              kube-controller-manager   1                   6206c580678b9       kube-controller-manager-functional-855000
	8fbb4c52c0ad3       27e3830e14027                                                                                         2 minutes ago        Exited              etcd                      1                   f1e690d6f3633       etcd-functional-855000
	32853c63e221a       7f8aa378bb47d                                                                                         2 minutes ago        Exited              kube-scheduler            1                   4f084af3eb905       kube-scheduler-functional-855000
	
	
	==> coredns [adfc8c1380ca] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:50509 - 35862 "HINFO IN 4816443957072998130.7919909527798556254. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.01161131s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[322278102]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (14-Sep-2024 17:00:16.647) (total time: 30000ms):
	Trace[322278102]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (17:00:46.647)
	Trace[322278102]: [30.000554403s] [30.000554403s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1312762317]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (14-Sep-2024 17:00:16.647) (total time: 30000ms):
	Trace[1312762317]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (17:00:46.647)
	Trace[1312762317]: [30.000703421s] [30.000703421s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1715282300]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (14-Sep-2024 17:00:16.647) (total time: 30000ms):
	Trace[1715282300]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (17:00:46.648)
	Trace[1715282300]: [30.000734107s] [30.000734107s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [e0ca2f8a9853] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:46546 - 30120 "HINFO IN 1310094447118423126.2284728402255112020. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.021450506s
	[INFO] 10.244.0.1:19397 - 5468 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.000105908s
	[INFO] 10.244.0.1:41533 - 41988 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.00009141s
	[INFO] 10.244.0.1:29347 - 30915 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.000035205s
	[INFO] 10.244.0.1:32262 - 11773 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.001093494s
	[INFO] 10.244.0.1:41844 - 12660 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.000063995s
	[INFO] 10.244.0.1:36453 - 14448 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.000094659s
	
	
	==> describe nodes <==
	Name:               functional-855000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-855000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fbeeb744274463b05401c917e5ab21bbaf5ef95a
	                    minikube.k8s.io/name=functional-855000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_14T09_59_41_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 14 Sep 2024 16:59:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-855000
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Sep 2024 17:02:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 14 Sep 2024 17:02:21 +0000   Sat, 14 Sep 2024 16:59:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 14 Sep 2024 17:02:21 +0000   Sat, 14 Sep 2024 16:59:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 14 Sep 2024 17:02:21 +0000   Sat, 14 Sep 2024 16:59:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 14 Sep 2024 17:02:21 +0000   Sat, 14 Sep 2024 16:59:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-855000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	System Info:
	  Machine ID:                 b37629b7944c4b75bf73044541d366d3
	  System UUID:                b37629b7944c4b75bf73044541d366d3
	  Boot ID:                    45957412-2db5-4011-9083-191e5ae1db21
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-64b4f8f9ff-ppc4j                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	  default                     hello-node-connect-65d86f57f4-xsmqz          0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         39s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         15s
	  kube-system                 coredns-7c65d6cfc9-jm9jv                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m45s
	  kube-system                 etcd-functional-855000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m51s
	  kube-system                 kube-apiserver-functional-855000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 kube-controller-manager-functional-855000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m51s
	  kube-system                 kube-proxy-xk6mq                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m46s
	  kube-system                 kube-scheduler-functional-855000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m51s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m44s                  kube-proxy       
	  Normal  Starting                 70s                    kube-proxy       
	  Normal  Starting                 2m15s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m51s                  kubelet          Node functional-855000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  2m51s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    2m51s                  kubelet          Node functional-855000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m51s                  kubelet          Node functional-855000 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m51s                  kubelet          Starting kubelet.
	  Normal  NodeReady                2m47s                  kubelet          Node functional-855000 status is now: NodeReady
	  Normal  RegisteredNode           2m46s                  node-controller  Node functional-855000 event: Registered Node functional-855000 in Controller
	  Normal  NodeHasNoDiskPressure    2m20s (x8 over 2m20s)  kubelet          Node functional-855000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m20s (x8 over 2m20s)  kubelet          Node functional-855000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 2m20s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     2m20s (x7 over 2m20s)  kubelet          Node functional-855000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m14s                  node-controller  Node functional-855000 event: Registered Node functional-855000 in Controller
	  Normal  Starting                 75s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  75s (x8 over 75s)      kubelet          Node functional-855000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    75s (x8 over 75s)      kubelet          Node functional-855000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     75s (x7 over 75s)      kubelet          Node functional-855000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  75s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           69s                    node-controller  Node functional-855000 event: Registered Node functional-855000 in Controller
	
	
	==> dmesg <==
	[  +0.981631] systemd-fstab-generator[3840]: Ignoring "noauto" option for root device
	[  +4.427841] kauditd_printk_skb: 199 callbacks suppressed
	[ +13.752323] kauditd_printk_skb: 34 callbacks suppressed
	[ +20.217293] systemd-fstab-generator[4881]: Ignoring "noauto" option for root device
	[Sep14 17:01] systemd-fstab-generator[5323]: Ignoring "noauto" option for root device
	[  +0.055346] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.100862] systemd-fstab-generator[5358]: Ignoring "noauto" option for root device
	[  +0.088040] systemd-fstab-generator[5370]: Ignoring "noauto" option for root device
	[  +0.098728] systemd-fstab-generator[5384]: Ignoring "noauto" option for root device
	[  +5.115214] kauditd_printk_skb: 89 callbacks suppressed
	[  +7.342661] systemd-fstab-generator[6012]: Ignoring "noauto" option for root device
	[  +0.088898] systemd-fstab-generator[6024]: Ignoring "noauto" option for root device
	[  +0.068400] systemd-fstab-generator[6036]: Ignoring "noauto" option for root device
	[  +0.084045] systemd-fstab-generator[6051]: Ignoring "noauto" option for root device
	[  +0.204994] systemd-fstab-generator[6217]: Ignoring "noauto" option for root device
	[  +1.092133] systemd-fstab-generator[6340]: Ignoring "noauto" option for root device
	[  +1.228542] kauditd_printk_skb: 189 callbacks suppressed
	[  +5.600969] kauditd_printk_skb: 41 callbacks suppressed
	[ +14.299603] systemd-fstab-generator[7348]: Ignoring "noauto" option for root device
	[  +6.437871] kauditd_printk_skb: 28 callbacks suppressed
	[  +8.575492] kauditd_printk_skb: 19 callbacks suppressed
	[  +5.449061] kauditd_printk_skb: 27 callbacks suppressed
	[Sep14 17:02] kauditd_printk_skb: 32 callbacks suppressed
	[  +6.181554] kauditd_printk_skb: 1 callbacks suppressed
	[ +10.086388] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [8fbb4c52c0ad] <==
	{"level":"info","ts":"2024-09-14T17:00:14.847024Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-14T17:00:14.847087Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2024-09-14T17:00:14.847124Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2024-09-14T17:00:14.847181Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-09-14T17:00:14.847303Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2024-09-14T17:00:14.847434Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-09-14T17:00:14.852631Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-855000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-14T17:00:14.852929Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-14T17:00:14.853280Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-14T17:00:14.853338Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-14T17:00:14.853378Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-14T17:00:14.855147Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-14T17:00:14.855147Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-14T17:00:14.857089Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-14T17:00:14.858496Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-09-14T17:01:03.113147Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-14T17:01:03.113173Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"functional-855000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"warn","ts":"2024-09-14T17:01:03.113207Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-14T17:01:03.113269Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-14T17:01:03.123545Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-14T17:01:03.123582Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-14T17:01:03.125926Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2024-09-14T17:01:03.128099Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-09-14T17:01:03.128170Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-09-14T17:01:03.128175Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"functional-855000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	
	==> etcd [fef63d3cb8ae] <==
	{"level":"info","ts":"2024-09-14T17:01:18.139734Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","added-peer-id":"7520ddf439b1d16","added-peer-peer-urls":["https://192.168.105.4:2380"]}
	{"level":"info","ts":"2024-09-14T17:01:18.139790Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-14T17:01:18.139821Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-14T17:01:18.140922Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-14T17:01:18.141631Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-14T17:01:18.141688Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-09-14T17:01:18.141747Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-09-14T17:01:18.142353Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"7520ddf439b1d16","initial-advertise-peer-urls":["https://192.168.105.4:2380"],"listen-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.4:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-14T17:01:18.142750Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-14T17:01:19.738242Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2024-09-14T17:01:19.738393Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-09-14T17:01:19.738474Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-09-14T17:01:19.738514Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2024-09-14T17:01:19.738543Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-09-14T17:01:19.738573Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2024-09-14T17:01:19.738593Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-09-14T17:01:19.740837Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-855000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-14T17:01:19.740931Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-14T17:01:19.741778Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-14T17:01:19.743450Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-14T17:01:19.744926Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-14T17:01:19.745045Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-14T17:01:19.745628Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-14T17:01:19.743472Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-14T17:01:19.747580Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	
	
	==> kernel <==
	 17:02:32 up 3 min,  0 users,  load average: 1.21, 0.64, 0.26
	Linux functional-855000 5.10.207 #1 SMP PREEMPT Sat Sep 14 04:33:12 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [dd13db8ee268] <==
	I0914 17:01:20.348581       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0914 17:01:20.348944       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0914 17:01:20.365382       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0914 17:01:20.365400       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0914 17:01:20.365446       1 aggregator.go:171] initial CRD sync complete...
	I0914 17:01:20.365454       1 autoregister_controller.go:144] Starting autoregister controller
	I0914 17:01:20.365457       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0914 17:01:20.365459       1 cache.go:39] Caches are synced for autoregister controller
	I0914 17:01:20.377691       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0914 17:01:20.377706       1 policy_source.go:224] refreshing policies
	I0914 17:01:20.396995       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0914 17:01:21.248184       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0914 17:01:21.349407       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.105.4]
	I0914 17:01:21.350097       1 controller.go:615] quota admission added evaluator for: endpoints
	I0914 17:01:21.351588       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0914 17:01:21.695286       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0914 17:01:21.699136       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0914 17:01:21.710495       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0914 17:01:21.718020       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0914 17:01:21.720038       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0914 17:01:42.311837       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.111.124.37"}
	I0914 17:01:47.952808       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0914 17:01:47.998160       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.109.242.31"}
	I0914 17:01:53.547407       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.103.228.85"}
	I0914 17:02:03.983755       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.98.8.157"}
	
	
	==> kube-controller-manager [17fba9077db5] <==
	I0914 17:00:18.736291       1 shared_informer.go:320] Caches are synced for expand
	I0914 17:00:18.736295       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0914 17:00:18.736308       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0914 17:00:18.736312       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0914 17:00:18.736315       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0914 17:00:18.736350       1 shared_informer.go:320] Caches are synced for taint
	I0914 17:00:18.736645       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0914 17:00:18.736707       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-855000"
	I0914 17:00:18.736738       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0914 17:00:18.737985       1 shared_informer.go:320] Caches are synced for stateful set
	I0914 17:00:18.739361       1 shared_informer.go:320] Caches are synced for namespace
	I0914 17:00:18.815185       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0914 17:00:18.887348       1 shared_informer.go:320] Caches are synced for crt configmap
	I0914 17:00:18.915512       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0914 17:00:18.920757       1 shared_informer.go:320] Caches are synced for resource quota
	I0914 17:00:18.937603       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0914 17:00:18.939056       1 shared_informer.go:320] Caches are synced for resource quota
	I0914 17:00:18.940551       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="218.933967ms"
	I0914 17:00:18.940654       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="29.865µs"
	I0914 17:00:18.988042       1 shared_informer.go:320] Caches are synced for endpoint
	I0914 17:00:19.373102       1 shared_informer.go:320] Caches are synced for garbage collector
	I0914 17:00:19.385646       1 shared_informer.go:320] Caches are synced for garbage collector
	I0914 17:00:19.385682       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0914 17:00:49.478208       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="13.285767ms"
	I0914 17:00:49.481121       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="39.742µs"
	
	
	==> kube-controller-manager [34362d414140] <==
	I0914 17:01:23.893864       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="30.123µs"
	I0914 17:01:24.257862       1 shared_informer.go:320] Caches are synced for garbage collector
	I0914 17:01:24.338529       1 shared_informer.go:320] Caches are synced for garbage collector
	I0914 17:01:24.338565       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0914 17:01:24.518383       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="8.273112ms"
	I0914 17:01:24.519148       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="32.581µs"
	I0914 17:01:47.963150       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="9.447593ms"
	I0914 17:01:47.967447       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="3.898969ms"
	I0914 17:01:47.967512       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="12.707µs"
	I0914 17:01:47.970436       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="22.54µs"
	I0914 17:01:53.619265       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="22.123µs"
	I0914 17:01:54.654446       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="39.122µs"
	I0914 17:01:55.655958       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="22.123µs"
	I0914 17:02:03.950285       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="6.933265ms"
	I0914 17:02:03.954539       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="4.135911ms"
	I0914 17:02:03.954707       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="26.789µs"
	I0914 17:02:03.955649       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="8.333µs"
	I0914 17:02:04.788250       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="27.373µs"
	I0914 17:02:05.809065       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="36.789µs"
	I0914 17:02:10.890121       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="40.664µs"
	I0914 17:02:17.135944       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="23.54µs"
	I0914 17:02:18.041071       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="37.663µs"
	I0914 17:02:21.207114       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-855000"
	I0914 17:02:24.146146       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="52.245µs"
	I0914 17:02:29.159752       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="81.702µs"
	
	
	==> kube-proxy [82b3e6d0388c] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0914 17:00:16.667514       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0914 17:00:16.672202       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	E0914 17:00:16.672293       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0914 17:00:16.687590       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0914 17:00:16.687613       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0914 17:00:16.687636       1 server_linux.go:169] "Using iptables Proxier"
	I0914 17:00:16.689060       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0914 17:00:16.689156       1 server.go:483] "Version info" version="v1.31.1"
	I0914 17:00:16.689161       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 17:00:16.689949       1 config.go:199] "Starting service config controller"
	I0914 17:00:16.690058       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0914 17:00:16.690088       1 config.go:105] "Starting endpoint slice config controller"
	I0914 17:00:16.690103       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0914 17:00:16.690263       1 config.go:328] "Starting node config controller"
	I0914 17:00:16.690358       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0914 17:00:16.791221       1 shared_informer.go:320] Caches are synced for node config
	I0914 17:00:16.791221       1 shared_informer.go:320] Caches are synced for service config
	I0914 17:00:16.791242       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [82c6ce49f985] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0914 17:01:21.629383       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0914 17:01:21.632857       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	E0914 17:01:21.632881       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0914 17:01:21.643677       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0914 17:01:21.643701       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0914 17:01:21.643715       1 server_linux.go:169] "Using iptables Proxier"
	I0914 17:01:21.644361       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0914 17:01:21.644481       1 server.go:483] "Version info" version="v1.31.1"
	I0914 17:01:21.644492       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 17:01:21.644939       1 config.go:199] "Starting service config controller"
	I0914 17:01:21.644952       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0914 17:01:21.644962       1 config.go:105] "Starting endpoint slice config controller"
	I0914 17:01:21.644964       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0914 17:01:21.645159       1 config.go:328] "Starting node config controller"
	I0914 17:01:21.645194       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0914 17:01:21.745267       1 shared_informer.go:320] Caches are synced for node config
	I0914 17:01:21.745267       1 shared_informer.go:320] Caches are synced for service config
	I0914 17:01:21.745282       1 shared_informer.go:320] Caches are synced for endpoint slice config
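
Editor's note: both kube-proxy containers above log the identical nftables cleanup failure and then fall back to the iptables proxier ("Using iptables Proxier"), so the error is startup noise on a guest kernel without nf_tables support rather than the cause of any failure in this report. A hedged way to confirm that from the host (profile name taken from this report; this assumes the nft userspace tool is present in the guest image):

	$ out/minikube-darwin-arm64 ssh -p functional-855000 -- sudo nft list tables
	# "Operation not supported" from the kernel here matches the proxier.go error above.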
	
	
	==> kube-scheduler [32853c63e221] <==
	I0914 17:00:13.433889       1 serving.go:386] Generated self-signed cert in-memory
	W0914 17:00:15.355241       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0914 17:00:15.355350       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0914 17:00:15.355390       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0914 17:00:15.355410       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0914 17:00:15.411250       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0914 17:00:15.411269       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 17:00:15.412203       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0914 17:00:15.412274       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0914 17:00:15.412286       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0914 17:00:15.412296       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0914 17:00:15.512666       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0914 17:01:03.092309       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [b1c97afe4afd] <==
	I0914 17:01:18.380144       1 serving.go:386] Generated self-signed cert in-memory
	W0914 17:01:20.278193       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0914 17:01:20.278209       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0914 17:01:20.278214       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0914 17:01:20.278227       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0914 17:01:20.311004       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0914 17:01:20.311067       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 17:01:20.314819       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0914 17:01:20.315026       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0914 17:01:20.316086       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0914 17:01:20.316099       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0914 17:01:20.416746       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
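
Editor's note: both scheduler instances log the same startup warnings, and the requestheader_controller line even spells out the shape of the fix. A hedged instantiation for this cluster (the rolebinding name is illustrative; since the scheduler authenticates as the user system:kube-scheduler rather than a service account, --user stands in for the suggested --serviceaccount):

	$ kubectl --context functional-855000 -n kube-system create rolebinding scheduler-auth-reader \
	      --role=extension-apiserver-authentication-reader \
	      --user=system:kube-scheduler

As the "Caches are synced" lines show, the scheduler proceeds without it, so the warning is cosmetic here.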
	
	
	==> kubelet <==
	Sep 14 17:02:17 functional-855000 kubelet[6347]: E0914 17:02:17.135707    6347 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 14 17:02:17 functional-855000 kubelet[6347]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 14 17:02:17 functional-855000 kubelet[6347]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 14 17:02:17 functional-855000 kubelet[6347]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 14 17:02:17 functional-855000 kubelet[6347]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 14 17:02:17 functional-855000 kubelet[6347]: I0914 17:02:17.193688    6347 scope.go:117] "RemoveContainer" containerID="a867ce382c1a87a0b1b22d9966f677e8c9142073ebf56b9e33a07d3e2798ef00"
	Sep 14 17:02:17 functional-855000 kubelet[6347]: I0914 17:02:17.295735    6347 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qn6fg\" (UniqueName: \"kubernetes.io/projected/994599e7-cc13-49dc-95c2-496cde65b686-kube-api-access-qn6fg\") pod \"sp-pod\" (UID: \"994599e7-cc13-49dc-95c2-496cde65b686\") " pod="default/sp-pod"
	Sep 14 17:02:17 functional-855000 kubelet[6347]: I0914 17:02:17.295756    6347 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-01ddbcf3-67bf-4a0e-a8ba-b5dd5fc969c3\" (UniqueName: \"kubernetes.io/host-path/994599e7-cc13-49dc-95c2-496cde65b686-pvc-01ddbcf3-67bf-4a0e-a8ba-b5dd5fc969c3\") pod \"sp-pod\" (UID: \"994599e7-cc13-49dc-95c2-496cde65b686\") " pod="default/sp-pod"
	Sep 14 17:02:18 functional-855000 kubelet[6347]: I0914 17:02:18.030653    6347 scope.go:117] "RemoveContainer" containerID="9ced6ea257708f5309c63cba7bc3ff9799389ffdaf4d2465a7bc525a60ce1b12"
	Sep 14 17:02:18 functional-855000 kubelet[6347]: I0914 17:02:18.030938    6347 scope.go:117] "RemoveContainer" containerID="cdbdee5af3fb3e8a38e4a0eb6b80018ab28df112d7363b770e7ce4e5a68286a0"
	Sep 14 17:02:18 functional-855000 kubelet[6347]: E0914 17:02:18.031062    6347 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-65d86f57f4-xsmqz_default(3532b3c7-e16c-4829-b380-324d0abedf3d)\"" pod="default/hello-node-connect-65d86f57f4-xsmqz" podUID="3532b3c7-e16c-4829-b380-324d0abedf3d"
	Sep 14 17:02:19 functional-855000 kubelet[6347]: I0914 17:02:19.093767    6347 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/sp-pod" podStartSLOduration=1.359882454 podStartE2EDuration="2.093750875s" podCreationTimestamp="2024-09-14 17:02:17 +0000 UTC" firstStartedPulling="2024-09-14 17:02:17.506399674 +0000 UTC m=+60.444042935" lastFinishedPulling="2024-09-14 17:02:18.240268094 +0000 UTC m=+61.177911356" observedRunningTime="2024-09-14 17:02:19.09358143 +0000 UTC m=+62.031224692" watchObservedRunningTime="2024-09-14 17:02:19.093750875 +0000 UTC m=+62.031394136"
	Sep 14 17:02:24 functional-855000 kubelet[6347]: I0914 17:02:24.127952    6347 scope.go:117] "RemoveContainer" containerID="eb35fab72bc8ad90bc1befe44adaea819a16b71481934755baa60e7624af0838"
	Sep 14 17:02:24 functional-855000 kubelet[6347]: E0914 17:02:24.129595    6347 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-64b4f8f9ff-ppc4j_default(9727cc52-2450-4742-93a6-757c76565324)\"" pod="default/hello-node-64b4f8f9ff-ppc4j" podUID="9727cc52-2450-4742-93a6-757c76565324"
	Sep 14 17:02:25 functional-855000 kubelet[6347]: I0914 17:02:25.778411    6347 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xpgn\" (UniqueName: \"kubernetes.io/projected/37c45988-666f-4e1a-81ef-e9c9f9ca9988-kube-api-access-2xpgn\") pod \"busybox-mount\" (UID: \"37c45988-666f-4e1a-81ef-e9c9f9ca9988\") " pod="default/busybox-mount"
	Sep 14 17:02:25 functional-855000 kubelet[6347]: I0914 17:02:25.778480    6347 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/37c45988-666f-4e1a-81ef-e9c9f9ca9988-test-volume\") pod \"busybox-mount\" (UID: \"37c45988-666f-4e1a-81ef-e9c9f9ca9988\") " pod="default/busybox-mount"
	Sep 14 17:02:29 functional-855000 kubelet[6347]: I0914 17:02:29.130400    6347 scope.go:117] "RemoveContainer" containerID="cdbdee5af3fb3e8a38e4a0eb6b80018ab28df112d7363b770e7ce4e5a68286a0"
	Sep 14 17:02:29 functional-855000 kubelet[6347]: E0914 17:02:29.134244    6347 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-65d86f57f4-xsmqz_default(3532b3c7-e16c-4829-b380-324d0abedf3d)\"" pod="default/hello-node-connect-65d86f57f4-xsmqz" podUID="3532b3c7-e16c-4829-b380-324d0abedf3d"
	Sep 14 17:02:29 functional-855000 kubelet[6347]: I0914 17:02:29.511002    6347 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xpgn\" (UniqueName: \"kubernetes.io/projected/37c45988-666f-4e1a-81ef-e9c9f9ca9988-kube-api-access-2xpgn\") pod \"37c45988-666f-4e1a-81ef-e9c9f9ca9988\" (UID: \"37c45988-666f-4e1a-81ef-e9c9f9ca9988\") "
	Sep 14 17:02:29 functional-855000 kubelet[6347]: I0914 17:02:29.511035    6347 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/37c45988-666f-4e1a-81ef-e9c9f9ca9988-test-volume\") pod \"37c45988-666f-4e1a-81ef-e9c9f9ca9988\" (UID: \"37c45988-666f-4e1a-81ef-e9c9f9ca9988\") "
	Sep 14 17:02:29 functional-855000 kubelet[6347]: I0914 17:02:29.511083    6347 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/37c45988-666f-4e1a-81ef-e9c9f9ca9988-test-volume" (OuterVolumeSpecName: "test-volume") pod "37c45988-666f-4e1a-81ef-e9c9f9ca9988" (UID: "37c45988-666f-4e1a-81ef-e9c9f9ca9988"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 14 17:02:29 functional-855000 kubelet[6347]: I0914 17:02:29.515387    6347 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37c45988-666f-4e1a-81ef-e9c9f9ca9988-kube-api-access-2xpgn" (OuterVolumeSpecName: "kube-api-access-2xpgn") pod "37c45988-666f-4e1a-81ef-e9c9f9ca9988" (UID: "37c45988-666f-4e1a-81ef-e9c9f9ca9988"). InnerVolumeSpecName "kube-api-access-2xpgn". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 14 17:02:29 functional-855000 kubelet[6347]: I0914 17:02:29.611298    6347 reconciler_common.go:288] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/37c45988-666f-4e1a-81ef-e9c9f9ca9988-test-volume\") on node \"functional-855000\" DevicePath \"\""
	Sep 14 17:02:29 functional-855000 kubelet[6347]: I0914 17:02:29.611326    6347 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-2xpgn\" (UniqueName: \"kubernetes.io/projected/37c45988-666f-4e1a-81ef-e9c9f9ca9988-kube-api-access-2xpgn\") on node \"functional-855000\" DevicePath \"\""
	Sep 14 17:02:30 functional-855000 kubelet[6347]: I0914 17:02:30.241014    6347 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="46c4bf24f3b853f4db96c25e99135fd704165624431c42f6b88c19fad668637d"
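
Editor's note: the recurring "Error syncing pod" entries show the echoserver-arm container crash-looping in both hello-node deployments, which is what TestFunctional/parallel/ServiceCmdConnect trips over below. A hedged sketch of the usual follow-up (pod name copied from the kubelet log above; --previous fetches the log of the exited container instead of the back-off placeholder):

	$ kubectl --context functional-855000 logs pod/hello-node-connect-65d86f57f4-xsmqz --previous
	$ kubectl --context functional-855000 describe pod hello-node-connect-65d86f57f4-xsmqz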
	
	
	==> storage-provisioner [0a69a48c0646] <==
	I0914 17:01:21.585688       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0914 17:01:21.598720       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0914 17:01:21.598743       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0914 17:01:39.021080       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0914 17:01:39.021325       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-855000_bc0f5263-6178-4558-b73d-cb4caec0d225!
	I0914 17:01:39.022050       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"91045746-a536-4e67-b515-a165c53020ee", APIVersion:"v1", ResourceVersion:"627", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-855000_bc0f5263-6178-4558-b73d-cb4caec0d225 became leader
	I0914 17:01:39.124949       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-855000_bc0f5263-6178-4558-b73d-cb4caec0d225!
	I0914 17:02:04.580894       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0914 17:02:04.581033       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    943be3bf-73a7-4bc1-a5f8-7539d2d0b789 313 0 2024-09-14 16:59:46 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-09-14 16:59:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-01ddbcf3-67bf-4a0e-a8ba-b5dd5fc969c3 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  01ddbcf3-67bf-4a0e-a8ba-b5dd5fc969c3 751 0 2024-09-14 17:02:04 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-09-14 17:02:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-09-14 17:02:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0914 17:02:04.581495       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-01ddbcf3-67bf-4a0e-a8ba-b5dd5fc969c3" provisioned
	I0914 17:02:04.581566       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0914 17:02:04.581857       1 volume_store.go:212] Trying to save persistentvolume "pvc-01ddbcf3-67bf-4a0e-a8ba-b5dd5fc969c3"
	I0914 17:02:04.581839       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"01ddbcf3-67bf-4a0e-a8ba-b5dd5fc969c3", APIVersion:"v1", ResourceVersion:"751", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0914 17:02:04.586176       1 volume_store.go:219] persistentvolume "pvc-01ddbcf3-67bf-4a0e-a8ba-b5dd5fc969c3" saved
	I0914 17:02:04.587480       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"01ddbcf3-67bf-4a0e-a8ba-b5dd5fc969c3", APIVersion:"v1", ResourceVersion:"751", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-01ddbcf3-67bf-4a0e-a8ba-b5dd5fc969c3
	
	
	==> storage-provisioner [883cf406c691] <==
	I0914 17:00:30.151499       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0914 17:00:30.155272       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0914 17:00:30.155294       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0914 17:00:30.158190       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0914 17:00:30.158244       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-855000_0b14d970-2936-4236-901c-68efdf142db9!
	I0914 17:00:30.158573       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"91045746-a536-4e67-b515-a165c53020ee", APIVersion:"v1", ResourceVersion:"492", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-855000_0b14d970-2936-4236-901c-68efdf142db9 became leader
	I0914 17:00:30.258735       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-855000_0b14d970-2936-4236-901c-68efdf142db9!
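
Editor's note: the two storage-provisioner containers coordinate through leader election on the kube-system/k8s.io-minikube-hostpath Endpoints object, which is why the 17:01 instance above waits roughly 17s (17:01:21.598 to 17:01:39.021) for the lease its 17:00 predecessor held before starting its controller. A hedged one-liner to inspect the current holder (object kind and name come straight from the events above):

	$ kubectl --context functional-855000 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml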
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-855000 -n functional-855000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-855000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-855000 describe pod busybox-mount
helpers_test.go:282: (dbg) kubectl --context functional-855000 describe pod busybox-mount:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-855000/192.168.105.4
	Start Time:       Sat, 14 Sep 2024 10:02:25 -0700
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  mount-munger:
	    Container ID:  docker://59e07d734576d1232fbf4f326b32f743378799ca62d2683339bccc5a3ede020f
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sat, 14 Sep 2024 10:02:27 -0700
	      Finished:     Sat, 14 Sep 2024 10:02:27 -0700
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2xpgn (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-2xpgn:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  7s    default-scheduler  Successfully assigned default/busybox-mount to functional-855000
	  Normal  Pulling    6s    kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     5s    kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.408s (1.408s including waiting). Image size: 3547125 bytes.
	  Normal  Created    5s    kubelet            Created container mount-munger
	  Normal  Started    5s    kubelet            Started container mount-munger

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (29.05s)
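
Editor's note: busybox-mount is flagged as "non-running" only because the harness filters on status.phase!=Running, which also matches pods that finished successfully; the describe output above shows Status: Succeeded with exit code 0, so that pod itself is healthy. A hedged variant that would exclude completed pods as well (field selectors on status.phase can be comma-joined):

	$ kubectl --context functional-855000 get po -A --field-selector=status.phase!=Running,status.phase!=Succeeded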

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (214.12s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-258000 node stop m02 -v=7 --alsologtostderr
E0914 10:06:47.917322    1603 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/functional-855000/client.crt: no such file or directory" logger="UnhandledError"
E0914 10:06:47.924905    1603 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/functional-855000/client.crt: no such file or directory" logger="UnhandledError"
E0914 10:06:47.938051    1603 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/functional-855000/client.crt: no such file or directory" logger="UnhandledError"
E0914 10:06:47.961386    1603 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/functional-855000/client.crt: no such file or directory" logger="UnhandledError"
E0914 10:06:48.004722    1603 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/functional-855000/client.crt: no such file or directory" logger="UnhandledError"
E0914 10:06:48.087120    1603 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/functional-855000/client.crt: no such file or directory" logger="UnhandledError"
E0914 10:06:48.250511    1603 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/functional-855000/client.crt: no such file or directory" logger="UnhandledError"
E0914 10:06:48.573920    1603 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/functional-855000/client.crt: no such file or directory" logger="UnhandledError"
E0914 10:06:49.217345    1603 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/functional-855000/client.crt: no such file or directory" logger="UnhandledError"
E0914 10:06:50.500705    1603 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/functional-855000/client.crt: no such file or directory" logger="UnhandledError"
E0914 10:06:53.064004    1603 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/functional-855000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Done: out/minikube-darwin-arm64 -p ha-258000 node stop m02 -v=7 --alsologtostderr: (12.186189667s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-258000 status -v=7 --alsologtostderr
E0914 10:06:58.185312    1603 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/functional-855000/client.crt: no such file or directory" logger="UnhandledError"
E0914 10:07:04.456453    1603 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/addons-528000/client.crt: no such file or directory" logger="UnhandledError"
E0914 10:07:08.428508    1603 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/functional-855000/client.crt: no such file or directory" logger="UnhandledError"
E0914 10:07:28.910284    1603 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/functional-855000/client.crt: no such file or directory" logger="UnhandledError"
E0914 10:08:09.872071    1603 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/functional-855000/client.crt: no such file or directory" logger="UnhandledError"
E0914 10:09:31.792406    1603 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/functional-855000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-258000 status -v=7 --alsologtostderr: exit status 7 (2m55.965327375s)

                                                
                                                
-- stdout --
	ha-258000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-258000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-258000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-258000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 10:06:58.131180    3168 out.go:345] Setting OutFile to fd 1 ...
	I0914 10:06:58.131370    3168 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:06:58.131374    3168 out.go:358] Setting ErrFile to fd 2...
	I0914 10:06:58.131376    3168 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:06:58.131536    3168 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19643-1079/.minikube/bin
	I0914 10:06:58.131661    3168 out.go:352] Setting JSON to false
	I0914 10:06:58.131674    3168 mustload.go:65] Loading cluster: ha-258000
	I0914 10:06:58.131704    3168 notify.go:220] Checking for updates...
	I0914 10:06:58.131924    3168 config.go:182] Loaded profile config "ha-258000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 10:06:58.131932    3168 status.go:255] checking status of ha-258000 ...
	I0914 10:06:58.132734    3168 status.go:330] ha-258000 host status = "Running" (err=<nil>)
	I0914 10:06:58.132744    3168 host.go:66] Checking if "ha-258000" exists ...
	I0914 10:06:58.132863    3168 host.go:66] Checking if "ha-258000" exists ...
	I0914 10:06:58.132984    3168 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 10:06:58.132992    3168 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/ha-258000/id_rsa Username:docker}
	W0914 10:07:24.052085    3168 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: operation timed out
	W0914 10:07:24.052230    3168 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0914 10:07:24.052276    3168 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0914 10:07:24.052286    3168 status.go:257] ha-258000 status: &{Name:ha-258000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0914 10:07:24.052307    3168 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0914 10:07:24.052317    3168 status.go:255] checking status of ha-258000-m02 ...
	I0914 10:07:24.052586    3168 status.go:330] ha-258000-m02 host status = "Stopped" (err=<nil>)
	I0914 10:07:24.052598    3168 status.go:343] host is not running, skipping remaining checks
	I0914 10:07:24.052601    3168 status.go:257] ha-258000-m02 status: &{Name:ha-258000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0914 10:07:24.052610    3168 status.go:255] checking status of ha-258000-m03 ...
	I0914 10:07:24.053315    3168 status.go:330] ha-258000-m03 host status = "Running" (err=<nil>)
	I0914 10:07:24.053322    3168 host.go:66] Checking if "ha-258000-m03" exists ...
	I0914 10:07:24.053421    3168 host.go:66] Checking if "ha-258000-m03" exists ...
	I0914 10:07:24.053540    3168 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 10:07:24.053552    3168 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/ha-258000-m03/id_rsa Username:docker}
	W0914 10:08:39.053461    3168 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0914 10:08:39.053504    3168 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0914 10:08:39.053512    3168 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0914 10:08:39.053516    3168 status.go:257] ha-258000-m03 status: &{Name:ha-258000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0914 10:08:39.053526    3168 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0914 10:08:39.053530    3168 status.go:255] checking status of ha-258000-m04 ...
	I0914 10:08:39.054234    3168 status.go:330] ha-258000-m04 host status = "Running" (err=<nil>)
	I0914 10:08:39.054241    3168 host.go:66] Checking if "ha-258000-m04" exists ...
	I0914 10:08:39.054330    3168 host.go:66] Checking if "ha-258000-m04" exists ...
	I0914 10:08:39.054440    3168 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 10:08:39.054449    3168 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/ha-258000-m04/id_rsa Username:docker}
	W0914 10:09:54.052763    3168 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0914 10:09:54.052807    3168 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0914 10:09:54.052816    3168 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0914 10:09:54.052820    3168 status.go:257] ha-258000-m04 status: &{Name:ha-258000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0914 10:09:54.052829    3168 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

                                                
                                                
** /stderr **
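
Editor's note: the 2m55.97s runtime of the status command is accounted for almost entirely by the three SSH dials above, each blocking until its TCP connect attempt times out: 10:06:58 to 10:07:24 (~26s) for ha-258000, 10:07:24 to 10:08:39 (~75s) for m03, and 10:08:39 to 10:09:54 (~75s) for m04, which together span the 10:06:58 to 10:09:54 window, i.e. about 2m56s.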
ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-258000 status -v=7 --alsologtostderr": ha-258000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-258000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-258000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-258000-m04
type: Worker
host: Error
kubelet: Nonexistent

                                                
                                                
ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-258000 status -v=7 --alsologtostderr": ha-258000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-258000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-258000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-258000-m04
type: Worker
host: Error
kubelet: Nonexistent

                                                
                                                
ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-258000 status -v=7 --alsologtostderr": ha-258000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-258000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-258000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-258000-m04
type: Worker
host: Error
kubelet: Nonexistent

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-258000 -n ha-258000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-258000 -n ha-258000: exit status 3 (25.965359958s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0914 10:10:20.017561    3227 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0914 10:10:20.017568    3227 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-258000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (214.12s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (103.01s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
E0914 10:11:36.721847    1603 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/addons-528000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:390: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (1m17.0490525s)
ha_test.go:413: expected profile "ha-258000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-258000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-258000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-258000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-258000 -n ha-258000
E0914 10:11:47.911502    1603 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/functional-855000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-258000 -n ha-258000: exit status 3 (25.9609785s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0914 10:12:03.026653    3269 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0914 10:12:03.026689    3269 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-258000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (103.01s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (183.8s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-258000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-258000 node start m02 -v=7 --alsologtostderr: exit status 80 (5.130375333s)

                                                
                                                
-- stdout --
	* Starting "ha-258000-m02" control-plane node in "ha-258000" cluster
	* Restarting existing qemu2 VM for "ha-258000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-258000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 10:12:03.092448    3280 out.go:345] Setting OutFile to fd 1 ...
	I0914 10:12:03.092760    3280 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:12:03.092766    3280 out.go:358] Setting ErrFile to fd 2...
	I0914 10:12:03.092769    3280 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:12:03.092939    3280 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19643-1079/.minikube/bin
	I0914 10:12:03.093238    3280 mustload.go:65] Loading cluster: ha-258000
	I0914 10:12:03.093573    3280 config.go:182] Loaded profile config "ha-258000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	W0914 10:12:03.093879    3280 host.go:58] "ha-258000-m02" host status: Stopped
	I0914 10:12:03.098235    3280 out.go:177] * Starting "ha-258000-m02" control-plane node in "ha-258000" cluster
	I0914 10:12:03.102061    3280 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0914 10:12:03.102075    3280 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0914 10:12:03.102081    3280 cache.go:56] Caching tarball of preloaded images
	I0914 10:12:03.102161    3280 preload.go:172] Found /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 10:12:03.102168    3280 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0914 10:12:03.102239    3280 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/ha-258000/config.json ...
	I0914 10:12:03.102611    3280 start.go:360] acquireMachinesLock for ha-258000-m02: {Name:mk3755b4300f547653a71a0686083053e04fcd63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 10:12:03.102678    3280 start.go:364] duration metric: took 32.125µs to acquireMachinesLock for "ha-258000-m02"
	I0914 10:12:03.102690    3280 start.go:96] Skipping create...Using existing machine configuration
	I0914 10:12:03.102696    3280 fix.go:54] fixHost starting: m02
	I0914 10:12:03.102810    3280 fix.go:112] recreateIfNeeded on ha-258000-m02: state=Stopped err=<nil>
	W0914 10:12:03.102816    3280 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 10:12:03.106270    3280 out.go:177] * Restarting existing qemu2 VM for "ha-258000-m02" ...
	I0914 10:12:03.110327    3280 qemu.go:418] Using hvf for hardware acceleration
	I0914 10:12:03.110382    3280 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/ha-258000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/ha-258000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/ha-258000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:bf:5a:38:49:09 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/ha-258000-m02/disk.qcow2
	I0914 10:12:03.112861    3280 main.go:141] libmachine: STDOUT: 
	I0914 10:12:03.112886    3280 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 10:12:03.112917    3280 fix.go:56] duration metric: took 10.219834ms for fixHost
	I0914 10:12:03.112924    3280 start.go:83] releasing machines lock for "ha-258000-m02", held for 10.239417ms
	W0914 10:12:03.112929    3280 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 10:12:03.112961    3280 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 10:12:03.112966    3280 start.go:729] Will try again in 5 seconds ...
	I0914 10:12:08.114965    3280 start.go:360] acquireMachinesLock for ha-258000-m02: {Name:mk3755b4300f547653a71a0686083053e04fcd63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 10:12:08.115501    3280 start.go:364] duration metric: took 460.167µs to acquireMachinesLock for "ha-258000-m02"
	I0914 10:12:08.115660    3280 start.go:96] Skipping create...Using existing machine configuration
	I0914 10:12:08.115680    3280 fix.go:54] fixHost starting: m02
	I0914 10:12:08.116480    3280 fix.go:112] recreateIfNeeded on ha-258000-m02: state=Stopped err=<nil>
	W0914 10:12:08.116506    3280 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 10:12:08.120802    3280 out.go:177] * Restarting existing qemu2 VM for "ha-258000-m02" ...
	I0914 10:12:08.123869    3280 qemu.go:418] Using hvf for hardware acceleration
	I0914 10:12:08.124058    3280 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/ha-258000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/ha-258000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/ha-258000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:bf:5a:38:49:09 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/ha-258000-m02/disk.qcow2
	I0914 10:12:08.133571    3280 main.go:141] libmachine: STDOUT: 
	I0914 10:12:08.133641    3280 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 10:12:08.133727    3280 fix.go:56] duration metric: took 18.048916ms for fixHost
	I0914 10:12:08.133747    3280 start.go:83] releasing machines lock for "ha-258000-m02", held for 18.223708ms
	W0914 10:12:08.133922    3280 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-258000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-258000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 10:12:08.138656    3280 out.go:201] 
	W0914 10:12:08.142962    3280 out.go:270] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 10:12:08.142984    3280 out.go:270] * 
	* 
	W0914 10:12:08.149702    3280 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 10:12:08.153823    3280 out.go:201] 

** /stderr **
ha_test.go:422: I0914 10:12:03.092448    3280 out.go:345] Setting OutFile to fd 1 ...
I0914 10:12:03.092760    3280 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0914 10:12:03.092766    3280 out.go:358] Setting ErrFile to fd 2...
I0914 10:12:03.092769    3280 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0914 10:12:03.092939    3280 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19643-1079/.minikube/bin
I0914 10:12:03.093238    3280 mustload.go:65] Loading cluster: ha-258000
I0914 10:12:03.093573    3280 config.go:182] Loaded profile config "ha-258000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
W0914 10:12:03.093879    3280 host.go:58] "ha-258000-m02" host status: Stopped
I0914 10:12:03.098235    3280 out.go:177] * Starting "ha-258000-m02" control-plane node in "ha-258000" cluster
I0914 10:12:03.102061    3280 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0914 10:12:03.102075    3280 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
I0914 10:12:03.102081    3280 cache.go:56] Caching tarball of preloaded images
I0914 10:12:03.102161    3280 preload.go:172] Found /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0914 10:12:03.102168    3280 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
I0914 10:12:03.102239    3280 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/ha-258000/config.json ...
I0914 10:12:03.102611    3280 start.go:360] acquireMachinesLock for ha-258000-m02: {Name:mk3755b4300f547653a71a0686083053e04fcd63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0914 10:12:03.102678    3280 start.go:364] duration metric: took 32.125µs to acquireMachinesLock for "ha-258000-m02"
I0914 10:12:03.102690    3280 start.go:96] Skipping create...Using existing machine configuration
I0914 10:12:03.102696    3280 fix.go:54] fixHost starting: m02
I0914 10:12:03.102810    3280 fix.go:112] recreateIfNeeded on ha-258000-m02: state=Stopped err=<nil>
W0914 10:12:03.102816    3280 fix.go:138] unexpected machine state, will restart: <nil>
I0914 10:12:03.106270    3280 out.go:177] * Restarting existing qemu2 VM for "ha-258000-m02" ...
I0914 10:12:03.110327    3280 qemu.go:418] Using hvf for hardware acceleration
I0914 10:12:03.110382    3280 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/ha-258000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/ha-258000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/ha-258000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:bf:5a:38:49:09 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/ha-258000-m02/disk.qcow2
I0914 10:12:03.112861    3280 main.go:141] libmachine: STDOUT: 
I0914 10:12:03.112886    3280 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0914 10:12:03.112917    3280 fix.go:56] duration metric: took 10.219834ms for fixHost
I0914 10:12:03.112924    3280 start.go:83] releasing machines lock for "ha-258000-m02", held for 10.239417ms
W0914 10:12:03.112929    3280 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0914 10:12:03.112961    3280 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0914 10:12:03.112966    3280 start.go:729] Will try again in 5 seconds ...
I0914 10:12:08.114965    3280 start.go:360] acquireMachinesLock for ha-258000-m02: {Name:mk3755b4300f547653a71a0686083053e04fcd63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0914 10:12:08.115501    3280 start.go:364] duration metric: took 460.167µs to acquireMachinesLock for "ha-258000-m02"
I0914 10:12:08.115660    3280 start.go:96] Skipping create...Using existing machine configuration
I0914 10:12:08.115680    3280 fix.go:54] fixHost starting: m02
I0914 10:12:08.116480    3280 fix.go:112] recreateIfNeeded on ha-258000-m02: state=Stopped err=<nil>
W0914 10:12:08.116506    3280 fix.go:138] unexpected machine state, will restart: <nil>
I0914 10:12:08.120802    3280 out.go:177] * Restarting existing qemu2 VM for "ha-258000-m02" ...
I0914 10:12:08.123869    3280 qemu.go:418] Using hvf for hardware acceleration
I0914 10:12:08.124058    3280 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/ha-258000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/ha-258000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/ha-258000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:bf:5a:38:49:09 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/ha-258000-m02/disk.qcow2
I0914 10:12:08.133571    3280 main.go:141] libmachine: STDOUT: 
I0914 10:12:08.133641    3280 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0914 10:12:08.133727    3280 fix.go:56] duration metric: took 18.048916ms for fixHost
I0914 10:12:08.133747    3280 start.go:83] releasing machines lock for "ha-258000-m02", held for 18.223708ms
W0914 10:12:08.133922    3280 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-258000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p ha-258000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0914 10:12:08.138656    3280 out.go:201] 
W0914 10:12:08.142962    3280 out.go:270] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0914 10:12:08.142984    3280 out.go:270] * 
* 
W0914 10:12:08.149702    3280 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0914 10:12:08.153823    3280 out.go:201] 

ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-258000 node start m02 -v=7 --alsologtostderr": exit status 80
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-258000 status -v=7 --alsologtostderr
E0914 10:12:15.635660    1603 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/functional-855000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-258000 status -v=7 --alsologtostderr: exit status 7 (2m32.698661083s)

-- stdout --
	ha-258000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-258000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-258000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-258000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

-- /stdout --
** stderr ** 
	I0914 10:12:08.225388    3284 out.go:345] Setting OutFile to fd 1 ...
	I0914 10:12:08.225590    3284 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:12:08.225598    3284 out.go:358] Setting ErrFile to fd 2...
	I0914 10:12:08.225602    3284 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:12:08.225796    3284 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19643-1079/.minikube/bin
	I0914 10:12:08.225943    3284 out.go:352] Setting JSON to false
	I0914 10:12:08.225956    3284 mustload.go:65] Loading cluster: ha-258000
	I0914 10:12:08.225998    3284 notify.go:220] Checking for updates...
	I0914 10:12:08.226253    3284 config.go:182] Loaded profile config "ha-258000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 10:12:08.226261    3284 status.go:255] checking status of ha-258000 ...
	I0914 10:12:08.227221    3284 status.go:330] ha-258000 host status = "Running" (err=<nil>)
	I0914 10:12:08.227237    3284 host.go:66] Checking if "ha-258000" exists ...
	I0914 10:12:08.227385    3284 host.go:66] Checking if "ha-258000" exists ...
	I0914 10:12:08.227527    3284 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 10:12:08.227536    3284 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/ha-258000/id_rsa Username:docker}
	W0914 10:12:08.227744    3284 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0914 10:12:08.227760    3284 retry.go:31] will retry after 240.319286ms: dial tcp 192.168.105.5:22: connect: host is down
	W0914 10:12:08.470416    3284 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0914 10:12:08.470475    3284 retry.go:31] will retry after 209.235689ms: dial tcp 192.168.105.5:22: connect: host is down
	W0914 10:12:08.682028    3284 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0914 10:12:08.682082    3284 retry.go:31] will retry after 591.066576ms: dial tcp 192.168.105.5:22: connect: host is down
	W0914 10:12:09.275743    3284 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0914 10:12:09.275938    3284 retry.go:31] will retry after 142.564916ms: new client: new client: dial tcp 192.168.105.5:22: connect: host is down
	I0914 10:12:09.419238    3284 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/ha-258000/id_rsa Username:docker}
	W0914 10:12:09.420415    3284 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0914 10:12:09.420464    3284 retry.go:31] will retry after 346.014677ms: dial tcp 192.168.105.5:22: connect: host is down
	W0914 10:12:09.768924    3284 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0914 10:12:09.769001    3284 retry.go:31] will retry after 255.980422ms: dial tcp 192.168.105.5:22: connect: host is down
	W0914 10:12:10.027499    3284 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0914 10:12:10.027581    3284 retry.go:31] will retry after 823.656568ms: dial tcp 192.168.105.5:22: connect: host is down
	W0914 10:12:10.851958    3284 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	W0914 10:12:10.852008    3284 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: host is down
	E0914 10:12:10.852016    3284 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: host is down
	I0914 10:12:10.852020    3284 status.go:257] ha-258000 status: &{Name:ha-258000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0914 10:12:10.852030    3284 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: host is down
	I0914 10:12:10.852034    3284 status.go:255] checking status of ha-258000-m02 ...
	I0914 10:12:10.852212    3284 status.go:330] ha-258000-m02 host status = "Stopped" (err=<nil>)
	I0914 10:12:10.852217    3284 status.go:343] host is not running, skipping remaining checks
	I0914 10:12:10.852219    3284 status.go:257] ha-258000-m02 status: &{Name:ha-258000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0914 10:12:10.852229    3284 status.go:255] checking status of ha-258000-m03 ...
	I0914 10:12:10.852806    3284 status.go:330] ha-258000-m03 host status = "Running" (err=<nil>)
	I0914 10:12:10.852813    3284 host.go:66] Checking if "ha-258000-m03" exists ...
	I0914 10:12:10.852924    3284 host.go:66] Checking if "ha-258000-m03" exists ...
	I0914 10:12:10.853069    3284 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 10:12:10.853075    3284 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/ha-258000-m03/id_rsa Username:docker}
	W0914 10:13:25.852145    3284 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0914 10:13:25.852331    3284 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0914 10:13:25.852364    3284 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0914 10:13:25.852382    3284 status.go:257] ha-258000-m03 status: &{Name:ha-258000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0914 10:13:25.852423    3284 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0914 10:13:25.852443    3284 status.go:255] checking status of ha-258000-m04 ...
	I0914 10:13:25.855532    3284 status.go:330] ha-258000-m04 host status = "Running" (err=<nil>)
	I0914 10:13:25.855561    3284 host.go:66] Checking if "ha-258000-m04" exists ...
	I0914 10:13:25.856130    3284 host.go:66] Checking if "ha-258000-m04" exists ...
	I0914 10:13:25.856746    3284 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 10:13:25.856776    3284 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/ha-258000-m04/id_rsa Username:docker}
	W0914 10:14:40.854877    3284 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0914 10:14:40.854924    3284 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0914 10:14:40.854933    3284 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0914 10:14:40.854937    3284 status.go:257] ha-258000-m04 status: &{Name:ha-258000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0914 10:14:40.854945    3284 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-258000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-258000 -n ha-258000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-258000 -n ha-258000: exit status 3 (25.966257583s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0914 10:15:06.816324    3350 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0914 10:15:06.816362    3350 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-258000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (183.80s)
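
Every restart attempt in this test dies on the same qemu2 driver error: the socket_vmnet client cannot reach "/var/run/socket_vmnet" (Connection refused), so the VM never boots and the node start exits with status 80. A quick way to confirm the daemon side of that failure independently of minikube is to dial the socket directly. The sketch below is a minimal, hypothetical Go probe, not part of the test suite; only the socket path is taken from the log, everything else is illustrative:

	// probe_vmnet.go - hypothetical diagnostic, not part of the test suite.
	// It only verifies that something is accepting connections on the
	// unix socket the qemu2 driver fails against in the log above.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path copied from the failing log
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// e.g. "connection refused", the same failure libmachine logs above
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}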

TestMultiControlPlane/serial/RestartClusterKeepsNodes (234.39s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-258000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-258000 -v=7 --alsologtostderr
E0914 10:16:36.711020    1603 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/addons-528000/client.crt: no such file or directory" logger="UnhandledError"
E0914 10:16:47.900621    1603 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/functional-855000/client.crt: no such file or directory" logger="UnhandledError"
E0914 10:17:59.801548    1603 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/addons-528000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-258000 -v=7 --alsologtostderr: (3m49.020978292s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-258000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-258000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.226791167s)

-- stdout --
	* [ha-258000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19643
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-258000" primary control-plane node in "ha-258000" cluster
	* Restarting existing qemu2 VM for "ha-258000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-258000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0914 10:20:14.068097    3508 out.go:345] Setting OutFile to fd 1 ...
	I0914 10:20:14.068269    3508 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:20:14.068273    3508 out.go:358] Setting ErrFile to fd 2...
	I0914 10:20:14.068277    3508 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:20:14.068443    3508 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19643-1079/.minikube/bin
	I0914 10:20:14.069731    3508 out.go:352] Setting JSON to false
	I0914 10:20:14.090176    3508 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2977,"bootTime":1726331437,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0914 10:20:14.090255    3508 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0914 10:20:14.095006    3508 out.go:177] * [ha-258000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0914 10:20:14.103118    3508 out.go:177]   - MINIKUBE_LOCATION=19643
	I0914 10:20:14.103142    3508 notify.go:220] Checking for updates...
	I0914 10:20:14.110168    3508 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	I0914 10:20:14.113049    3508 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 10:20:14.116121    3508 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 10:20:14.119135    3508 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	I0914 10:20:14.120430    3508 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 10:20:14.123428    3508 config.go:182] Loaded profile config "ha-258000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 10:20:14.123479    3508 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 10:20:14.128095    3508 out.go:177] * Using the qemu2 driver based on existing profile
	I0914 10:20:14.133128    3508 start.go:297] selected driver: qemu2
	I0914 10:20:14.133135    3508 start.go:901] validating driver "qemu2" against &{Name:ha-258000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-258000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 10:20:14.133226    3508 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 10:20:14.135870    3508 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 10:20:14.135898    3508 cni.go:84] Creating CNI manager for ""
	I0914 10:20:14.135927    3508 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0914 10:20:14.135978    3508 start.go:340] cluster config:
	{Name:ha-258000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-258000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 10:20:14.140084    3508 iso.go:125] acquiring lock: {Name:mk9ce8cb266895455f3a8fa26b67755853c5c1f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 10:20:14.148113    3508 out.go:177] * Starting "ha-258000" primary control-plane node in "ha-258000" cluster
	I0914 10:20:14.152157    3508 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0914 10:20:14.152171    3508 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0914 10:20:14.152181    3508 cache.go:56] Caching tarball of preloaded images
	I0914 10:20:14.152242    3508 preload.go:172] Found /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 10:20:14.152248    3508 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0914 10:20:14.152312    3508 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/ha-258000/config.json ...
	I0914 10:20:14.152810    3508 start.go:360] acquireMachinesLock for ha-258000: {Name:mk3755b4300f547653a71a0686083053e04fcd63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 10:20:14.152846    3508 start.go:364] duration metric: took 30.25µs to acquireMachinesLock for "ha-258000"
	I0914 10:20:14.152855    3508 start.go:96] Skipping create...Using existing machine configuration
	I0914 10:20:14.152861    3508 fix.go:54] fixHost starting: 
	I0914 10:20:14.152985    3508 fix.go:112] recreateIfNeeded on ha-258000: state=Stopped err=<nil>
	W0914 10:20:14.152994    3508 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 10:20:14.156055    3508 out.go:177] * Restarting existing qemu2 VM for "ha-258000" ...
	I0914 10:20:14.164151    3508 qemu.go:418] Using hvf for hardware acceleration
	I0914 10:20:14.164196    3508 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/ha-258000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/ha-258000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/ha-258000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:42:5a:cd:0c:bd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/ha-258000/disk.qcow2
	I0914 10:20:14.166386    3508 main.go:141] libmachine: STDOUT: 
	I0914 10:20:14.166406    3508 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 10:20:14.166439    3508 fix.go:56] duration metric: took 13.577667ms for fixHost
	I0914 10:20:14.166445    3508 start.go:83] releasing machines lock for "ha-258000", held for 13.59475ms
	W0914 10:20:14.166452    3508 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 10:20:14.166494    3508 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 10:20:14.166499    3508 start.go:729] Will try again in 5 seconds ...
	I0914 10:20:19.168564    3508 start.go:360] acquireMachinesLock for ha-258000: {Name:mk3755b4300f547653a71a0686083053e04fcd63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 10:20:19.169007    3508 start.go:364] duration metric: took 342.167µs to acquireMachinesLock for "ha-258000"
	I0914 10:20:19.169144    3508 start.go:96] Skipping create...Using existing machine configuration
	I0914 10:20:19.169166    3508 fix.go:54] fixHost starting: 
	I0914 10:20:19.169984    3508 fix.go:112] recreateIfNeeded on ha-258000: state=Stopped err=<nil>
	W0914 10:20:19.170010    3508 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 10:20:19.178549    3508 out.go:177] * Restarting existing qemu2 VM for "ha-258000" ...
	I0914 10:20:19.182286    3508 qemu.go:418] Using hvf for hardware acceleration
	I0914 10:20:19.182526    3508 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/ha-258000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/ha-258000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/ha-258000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:42:5a:cd:0c:bd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/ha-258000/disk.qcow2
	I0914 10:20:19.192179    3508 main.go:141] libmachine: STDOUT: 
	I0914 10:20:19.192246    3508 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 10:20:19.192334    3508 fix.go:56] duration metric: took 23.17175ms for fixHost
	I0914 10:20:19.192358    3508 start.go:83] releasing machines lock for "ha-258000", held for 23.325167ms
	W0914 10:20:19.192544    3508 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-258000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-258000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 10:20:19.200459    3508 out.go:201] 
	W0914 10:20:19.204483    3508 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 10:20:19.204502    3508 out.go:270] * 
	* 
	W0914 10:20:19.207088    3508 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 10:20:19.214490    3508 out.go:201] 

** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-258000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-258000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-258000 -n ha-258000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-258000 -n ha-258000: exit status 7 (33.3845ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-258000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (234.39s)
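
The start path in this log is easy to trace: acquire the machines lock (start.go:360), reuse the existing machine (start.go:96), attempt the driver start, and on failure wait five seconds and retry exactly once (start.go:714/729) before exiting with GUEST_PROVISION (exit status 80). Below is a condensed sketch of that control flow; the function names are stand-ins and this is a paraphrase of the log, not minikube's actual code:

	// Condensed sketch of the retry shape visible in the log above.
	// startHost and its error are hypothetical stand-ins; the 5s delay
	// and the two attempts mirror start.go:729 and the final exit.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	var errVmnet = errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)

	func startHost() error { return errVmnet } // fails both times in this run

	func main() {
		if err := startHost(); err == nil {
			return
		}
		fmt.Println("! StartHost failed, but will try again")
		time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
		if err := startHost(); err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err) // minikube exits 80 here
		}
	}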

TestMultiControlPlane/serial/DeleteSecondaryNode (0.1s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-258000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-258000 node delete m03 -v=7 --alsologtostderr: exit status 83 (40.898416ms)

-- stdout --
	* The control-plane node ha-258000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-258000"

-- /stdout --
** stderr ** 
	I0914 10:20:19.361933    3524 out.go:345] Setting OutFile to fd 1 ...
	I0914 10:20:19.362152    3524 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:20:19.362156    3524 out.go:358] Setting ErrFile to fd 2...
	I0914 10:20:19.362159    3524 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:20:19.362308    3524 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19643-1079/.minikube/bin
	I0914 10:20:19.362543    3524 mustload.go:65] Loading cluster: ha-258000
	I0914 10:20:19.362782    3524 config.go:182] Loaded profile config "ha-258000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	W0914 10:20:19.363091    3524 out.go:270] ! The control-plane node ha-258000 host is not running (will try others): state=Stopped
	! The control-plane node ha-258000 host is not running (will try others): state=Stopped
	W0914 10:20:19.363201    3524 out.go:270] ! The control-plane node ha-258000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-258000-m02 host is not running (will try others): state=Stopped
	I0914 10:20:19.367981    3524 out.go:177] * The control-plane node ha-258000-m03 host is not running: state=Stopped
	I0914 10:20:19.370890    3524 out.go:177]   To start a cluster, run: "minikube start -p ha-258000"

** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-258000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-258000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-258000 status -v=7 --alsologtostderr: exit status 7 (30.523916ms)

-- stdout --
	ha-258000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-258000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-258000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-258000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0914 10:20:19.403238    3526 out.go:345] Setting OutFile to fd 1 ...
	I0914 10:20:19.403393    3526 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:20:19.403396    3526 out.go:358] Setting ErrFile to fd 2...
	I0914 10:20:19.403399    3526 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:20:19.403526    3526 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19643-1079/.minikube/bin
	I0914 10:20:19.403639    3526 out.go:352] Setting JSON to false
	I0914 10:20:19.403647    3526 mustload.go:65] Loading cluster: ha-258000
	I0914 10:20:19.403719    3526 notify.go:220] Checking for updates...
	I0914 10:20:19.403875    3526 config.go:182] Loaded profile config "ha-258000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 10:20:19.403883    3526 status.go:255] checking status of ha-258000 ...
	I0914 10:20:19.404112    3526 status.go:330] ha-258000 host status = "Stopped" (err=<nil>)
	I0914 10:20:19.404115    3526 status.go:343] host is not running, skipping remaining checks
	I0914 10:20:19.404117    3526 status.go:257] ha-258000 status: &{Name:ha-258000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0914 10:20:19.404127    3526 status.go:255] checking status of ha-258000-m02 ...
	I0914 10:20:19.404213    3526 status.go:330] ha-258000-m02 host status = "Stopped" (err=<nil>)
	I0914 10:20:19.404216    3526 status.go:343] host is not running, skipping remaining checks
	I0914 10:20:19.404217    3526 status.go:257] ha-258000-m02 status: &{Name:ha-258000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0914 10:20:19.404221    3526 status.go:255] checking status of ha-258000-m03 ...
	I0914 10:20:19.404310    3526 status.go:330] ha-258000-m03 host status = "Stopped" (err=<nil>)
	I0914 10:20:19.404313    3526 status.go:343] host is not running, skipping remaining checks
	I0914 10:20:19.404316    3526 status.go:257] ha-258000-m03 status: &{Name:ha-258000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0914 10:20:19.404320    3526 status.go:255] checking status of ha-258000-m04 ...
	I0914 10:20:19.404412    3526 status.go:330] ha-258000-m04 host status = "Stopped" (err=<nil>)
	I0914 10:20:19.404415    3526 status.go:343] host is not running, skipping remaining checks
	I0914 10:20:19.404416    3526 status.go:257] ha-258000-m04 status: &{Name:ha-258000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-258000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-258000 -n ha-258000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-258000 -n ha-258000: exit status 7 (30.251708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-258000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.10s)
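
node delete fails fast here (exit status 83) because every control-plane host is Stopped, and the follow-up status call exits 7 for the same reason. The per-node record dumped at status.go:257 above has the shape sketched below; the Go type is a hypothetical reconstruction from the log output, not minikube source, and the exit-code check is illustrative:

	// Hypothetical reconstruction of the node-status record dumped at
	// status.go:257 above; field names come from the log, the type name
	// and the exit-code mapping are assumptions.
	package main

	import (
		"fmt"
		"os"
	)

	type nodeStatus struct {
		Name, Host, Kubelet, APIServer, Kubeconfig string
		Worker                                     bool
	}

	func main() {
		nodes := []nodeStatus{
			{Name: "ha-258000", Host: "Stopped", Kubelet: "Stopped", APIServer: "Stopped", Kubeconfig: "Stopped"},
			{Name: "ha-258000-m04", Host: "Stopped", Kubelet: "Stopped", Worker: true},
		}
		anyDown := false
		for _, n := range nodes {
			fmt.Printf("%s\thost: %s\tkubelet: %s\n", n.Name, n.Host, n.Kubelet)
			if n.Host != "Running" {
				anyDown = true
			}
		}
		if anyDown {
			os.Exit(7) // mirrors the "exit status 7" the test records above
		}
	}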

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-258000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-258000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-258000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-258000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-258000 -n ha-258000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-258000 -n ha-258000: exit status 7 (30.35025ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-258000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)
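The assertion at ha_test.go:413 reads the "Status" field for the profile out of the "profile list --output json" payload quoted above. A minimal way to pull the same field out by hand, assuming jq is available on the host (profile name taken from this run):

  $ out/minikube-darwin-arm64 profile list --output json \
      | jq -r '.valid[] | select(.Name == "ha-258000") | .Status'
  Stopped

In this run it prints "Stopped" while the test wants "Degraded", which is exactly the mismatch reported above.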

TestMultiControlPlane/serial/StopCluster (202.09s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-258000 stop -v=7 --alsologtostderr
E0914 10:21:36.699757    1603 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/addons-528000/client.crt: no such file or directory" logger="UnhandledError"
E0914 10:21:47.889764    1603 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/functional-855000/client.crt: no such file or directory" logger="UnhandledError"
E0914 10:23:10.974513    1603 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/functional-855000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Done: out/minikube-darwin-arm64 -p ha-258000 stop -v=7 --alsologtostderr: (3m21.983630292s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-258000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-258000 status -v=7 --alsologtostderr: exit status 7 (69.348584ms)

-- stdout --
	ha-258000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-258000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-258000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-258000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0914 10:23:41.555185    3675 out.go:345] Setting OutFile to fd 1 ...
	I0914 10:23:41.555393    3675 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:23:41.555398    3675 out.go:358] Setting ErrFile to fd 2...
	I0914 10:23:41.555401    3675 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:23:41.555571    3675 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19643-1079/.minikube/bin
	I0914 10:23:41.555746    3675 out.go:352] Setting JSON to false
	I0914 10:23:41.555759    3675 mustload.go:65] Loading cluster: ha-258000
	I0914 10:23:41.555813    3675 notify.go:220] Checking for updates...
	I0914 10:23:41.556093    3675 config.go:182] Loaded profile config "ha-258000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 10:23:41.556104    3675 status.go:255] checking status of ha-258000 ...
	I0914 10:23:41.556418    3675 status.go:330] ha-258000 host status = "Stopped" (err=<nil>)
	I0914 10:23:41.556423    3675 status.go:343] host is not running, skipping remaining checks
	I0914 10:23:41.556426    3675 status.go:257] ha-258000 status: &{Name:ha-258000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0914 10:23:41.556438    3675 status.go:255] checking status of ha-258000-m02 ...
	I0914 10:23:41.556557    3675 status.go:330] ha-258000-m02 host status = "Stopped" (err=<nil>)
	I0914 10:23:41.556560    3675 status.go:343] host is not running, skipping remaining checks
	I0914 10:23:41.556563    3675 status.go:257] ha-258000-m02 status: &{Name:ha-258000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0914 10:23:41.556568    3675 status.go:255] checking status of ha-258000-m03 ...
	I0914 10:23:41.556699    3675 status.go:330] ha-258000-m03 host status = "Stopped" (err=<nil>)
	I0914 10:23:41.556703    3675 status.go:343] host is not running, skipping remaining checks
	I0914 10:23:41.556705    3675 status.go:257] ha-258000-m03 status: &{Name:ha-258000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0914 10:23:41.556710    3675 status.go:255] checking status of ha-258000-m04 ...
	I0914 10:23:41.556838    3675 status.go:330] ha-258000-m04 host status = "Stopped" (err=<nil>)
	I0914 10:23:41.556842    3675 status.go:343] host is not running, skipping remaining checks
	I0914 10:23:41.556844    3675 status.go:257] ha-258000-m04 status: &{Name:ha-258000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-258000 status -v=7 --alsologtostderr": ha-258000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-258000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-258000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-258000-m04
type: Worker
host: Stopped
kubelet: Stopped

ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-258000 status -v=7 --alsologtostderr": ha-258000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-258000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-258000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-258000-m04
type: Worker
host: Stopped
kubelet: Stopped

ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-258000 status -v=7 --alsologtostderr": ha-258000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-258000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-258000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-258000-m04
type: Worker
host: Stopped
kubelet: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-258000 -n ha-258000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-258000 -n ha-258000: exit status 7 (33.224125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-258000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (202.09s)
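The three checks at ha_test.go:543, 549 and 552 evidently boil down to counting node roles and component states in the status text quoted above. A rough shell equivalent of those counts, using the same binary and profile as this run:

  $ out/minikube-darwin-arm64 -p ha-258000 status 2>/dev/null | grep -c 'type: Control Plane'
  3
  $ out/minikube-darwin-arm64 -p ha-258000 status 2>/dev/null | grep -c 'kubelet: Stopped'
  4
  $ out/minikube-darwin-arm64 -p ha-258000 status 2>/dev/null | grep -c 'apiserver: Stopped'
  3

One plausible reading: the earlier DeleteSecondaryNode step failed, so a control-plane node that should have been removed is still counted, leaving every total one higher than the test expects.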

TestMultiControlPlane/serial/RestartCluster (5.26s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-258000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-258000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.186266459s)

-- stdout --
	* [ha-258000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19643
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-258000" primary control-plane node in "ha-258000" cluster
	* Restarting existing qemu2 VM for "ha-258000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-258000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0914 10:23:41.620118    3679 out.go:345] Setting OutFile to fd 1 ...
	I0914 10:23:41.620241    3679 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:23:41.620244    3679 out.go:358] Setting ErrFile to fd 2...
	I0914 10:23:41.620246    3679 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:23:41.620396    3679 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19643-1079/.minikube/bin
	I0914 10:23:41.621405    3679 out.go:352] Setting JSON to false
	I0914 10:23:41.637656    3679 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3184,"bootTime":1726331437,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0914 10:23:41.637735    3679 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0914 10:23:41.642891    3679 out.go:177] * [ha-258000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0914 10:23:41.649824    3679 out.go:177]   - MINIKUBE_LOCATION=19643
	I0914 10:23:41.649858    3679 notify.go:220] Checking for updates...
	I0914 10:23:41.656792    3679 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	I0914 10:23:41.659774    3679 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 10:23:41.662779    3679 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 10:23:41.664156    3679 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	I0914 10:23:41.666764    3679 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 10:23:41.670130    3679 config.go:182] Loaded profile config "ha-258000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 10:23:41.670393    3679 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 10:23:41.674581    3679 out.go:177] * Using the qemu2 driver based on existing profile
	I0914 10:23:41.681794    3679 start.go:297] selected driver: qemu2
	I0914 10:23:41.681801    3679 start.go:901] validating driver "qemu2" against &{Name:ha-258000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.31.1 ClusterName:ha-258000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storage
class:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-ho
st Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 10:23:41.681871    3679 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 10:23:41.684213    3679 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 10:23:41.684265    3679 cni.go:84] Creating CNI manager for ""
	I0914 10:23:41.684283    3679 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0914 10:23:41.684335    3679 start.go:340] cluster config:
	{Name:ha-258000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-258000 Namespace:default APIServerHAVIP:192.168.1
05.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false
helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 10:23:41.687915    3679 iso.go:125] acquiring lock: {Name:mk9ce8cb266895455f3a8fa26b67755853c5c1f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 10:23:41.695764    3679 out.go:177] * Starting "ha-258000" primary control-plane node in "ha-258000" cluster
	I0914 10:23:41.699853    3679 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0914 10:23:41.699869    3679 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0914 10:23:41.699884    3679 cache.go:56] Caching tarball of preloaded images
	I0914 10:23:41.699957    3679 preload.go:172] Found /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 10:23:41.699967    3679 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0914 10:23:41.700035    3679 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/ha-258000/config.json ...
	I0914 10:23:41.700499    3679 start.go:360] acquireMachinesLock for ha-258000: {Name:mk3755b4300f547653a71a0686083053e04fcd63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 10:23:41.700532    3679 start.go:364] duration metric: took 27.166µs to acquireMachinesLock for "ha-258000"
	I0914 10:23:41.700541    3679 start.go:96] Skipping create...Using existing machine configuration
	I0914 10:23:41.700546    3679 fix.go:54] fixHost starting: 
	I0914 10:23:41.700661    3679 fix.go:112] recreateIfNeeded on ha-258000: state=Stopped err=<nil>
	W0914 10:23:41.700669    3679 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 10:23:41.705838    3679 out.go:177] * Restarting existing qemu2 VM for "ha-258000" ...
	I0914 10:23:41.713798    3679 qemu.go:418] Using hvf for hardware acceleration
	I0914 10:23:41.713842    3679 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/ha-258000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/ha-258000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/ha-258000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:42:5a:cd:0c:bd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/ha-258000/disk.qcow2
	I0914 10:23:41.715903    3679 main.go:141] libmachine: STDOUT: 
	I0914 10:23:41.715918    3679 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 10:23:41.715949    3679 fix.go:56] duration metric: took 15.402709ms for fixHost
	I0914 10:23:41.715953    3679 start.go:83] releasing machines lock for "ha-258000", held for 15.416709ms
	W0914 10:23:41.715957    3679 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 10:23:41.715996    3679 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 10:23:41.716000    3679 start.go:729] Will try again in 5 seconds ...
	I0914 10:23:46.718004    3679 start.go:360] acquireMachinesLock for ha-258000: {Name:mk3755b4300f547653a71a0686083053e04fcd63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 10:23:46.718490    3679 start.go:364] duration metric: took 357.167µs to acquireMachinesLock for "ha-258000"
	I0914 10:23:46.718615    3679 start.go:96] Skipping create...Using existing machine configuration
	I0914 10:23:46.718638    3679 fix.go:54] fixHost starting: 
	I0914 10:23:46.719360    3679 fix.go:112] recreateIfNeeded on ha-258000: state=Stopped err=<nil>
	W0914 10:23:46.719390    3679 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 10:23:46.731542    3679 out.go:177] * Restarting existing qemu2 VM for "ha-258000" ...
	I0914 10:23:46.735847    3679 qemu.go:418] Using hvf for hardware acceleration
	I0914 10:23:46.736072    3679 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/ha-258000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/ha-258000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/ha-258000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:42:5a:cd:0c:bd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/ha-258000/disk.qcow2
	I0914 10:23:46.744890    3679 main.go:141] libmachine: STDOUT: 
	I0914 10:23:46.744948    3679 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 10:23:46.745019    3679 fix.go:56] duration metric: took 26.3855ms for fixHost
	I0914 10:23:46.745037    3679 start.go:83] releasing machines lock for "ha-258000", held for 26.528375ms
	W0914 10:23:46.745251    3679 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-258000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-258000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 10:23:46.752873    3679 out.go:201] 
	W0914 10:23:46.755905    3679 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 10:23:46.755930    3679 out.go:270] * 
	* 
	W0914 10:23:46.758659    3679 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 10:23:46.769917    3679 out.go:201] 

** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-258000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-258000 -n ha-258000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-258000 -n ha-258000: exit status 7 (69.209458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-258000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.26s)
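Both restart attempts above die at the same point: qemu is launched through socket_vmnet_client, and the client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet. Some first diagnostic steps on the host, sketched under the assumption that the paths in the log are current (the launchd job name, if any, depends on how socket_vmnet was installed):

  $ ls -l /var/run/socket_vmnet                  # the socket should exist while the daemon is up
  $ pgrep -fl socket_vmnet                       # is the daemon process running at all?
  $ sudo launchctl list | grep -i socket_vmnet   # if launchd-managed, is the job loaded?

With the daemon down, every qemu2 start on the socket_vmnet network fails this way, which matches the long run of "Connection refused" failures across this report.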

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-258000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-258000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-258000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-258000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"K
ubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kub
evirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\
"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-258000 -n ha-258000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-258000 -n ha-258000: exit status 7 (30.53625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-258000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-258000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-258000 --control-plane -v=7 --alsologtostderr: exit status 83 (41.711791ms)

-- stdout --
	* The control-plane node ha-258000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-258000"

-- /stdout --
** stderr ** 
	I0914 10:23:46.957572    3699 out.go:345] Setting OutFile to fd 1 ...
	I0914 10:23:46.957730    3699 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:23:46.957733    3699 out.go:358] Setting ErrFile to fd 2...
	I0914 10:23:46.957736    3699 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:23:46.957891    3699 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19643-1079/.minikube/bin
	I0914 10:23:46.958112    3699 mustload.go:65] Loading cluster: ha-258000
	I0914 10:23:46.958365    3699 config.go:182] Loaded profile config "ha-258000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	W0914 10:23:46.958676    3699 out.go:270] ! The control-plane node ha-258000 host is not running (will try others): state=Stopped
	! The control-plane node ha-258000 host is not running (will try others): state=Stopped
	W0914 10:23:46.958781    3699 out.go:270] ! The control-plane node ha-258000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-258000-m02 host is not running (will try others): state=Stopped
	I0914 10:23:46.961647    3699 out.go:177] * The control-plane node ha-258000-m03 host is not running: state=Stopped
	I0914 10:23:46.965695    3699 out.go:177]   To start a cluster, run: "minikube start -p ha-258000"

** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-258000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-258000 -n ha-258000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-258000 -n ha-258000: exit status 7 (30.061208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-258000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

TestImageBuild/serial/Setup (10.17s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-699000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-699000 --driver=qemu2 : exit status 80 (10.099837042s)

-- stdout --
	* [image-699000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19643
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-699000" primary control-plane node in "image-699000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-699000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-699000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-699000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-699000 -n image-699000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-699000 -n image-699000: exit status 7 (69.403709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-699000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (10.17s)

TestJSONOutput/start/Command (9.99s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-097000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-097000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.99049975s)

-- stdout --
	{"specversion":"1.0","id":"3bd44d45-9968-41f0-92f7-298bb7b5cb8d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-097000] minikube v1.34.0 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"93547fbd-345b-40e9-9547-f3012d098bf3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19643"}}
	{"specversion":"1.0","id":"082d3807-7278-45fc-96a2-655dbdfd1c2a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig"}}
	{"specversion":"1.0","id":"5f5e8e40-41bf-42af-ab3a-2140d3079594","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"66fca01e-331d-4c80-aa42-6c226c4bd587","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"5ead2ee3-1f6c-41c0-a32c-d05ccc10a29b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube"}}
	{"specversion":"1.0","id":"bcf41585-8b51-495b-a468-dc4d4cd5a3ef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"1badf3a1-bbca-45b9-bd4d-a1a0046b1a51","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"7c9ee8ab-c5e8-4afc-a929-7e51d7f30548","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"db3878e5-a867-416c-80c3-55ea7a3f8843","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-097000\" primary control-plane node in \"json-output-097000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"dee7c9da-71a2-4fae-88b2-df39fcd602f8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"2102b009-b4ae-4f25-be87-d22f0e9090e0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-097000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"82d3189f-ce2e-4010-9bde-e7891a907ac2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"49c7175f-3f6e-4840-b6a5-65abe956fc86","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"fdd4bd47-b366-425c-99bc-3e2e0fc71f10","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-097000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"f56473c6-d46e-4e43-b28a-cb70ebb71bf3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"f786c0af-dd55-440b-b58f-70663ce777f2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-097000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.99s)
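The unmarshal failure at json_output_test.go:70 is mechanical: the bare "OUTPUT:" and "ERROR:" lines emitted by socket_vmnet_client are interleaved with the CloudEvents stream, and "invalid character 'O'" is the JSON decoder hitting the O of "OUTPUT:". A hedged sketch for recovering just the machine-readable events from such a mixed stream, assuming jq is available:

  $ out/minikube-darwin-arm64 start -p json-output-097000 --output=json --user=testUser \
        --memory=2200 --wait=true --driver=qemu2 2>/dev/null \
      | grep '^{' | jq -r '[.type, .data.message] | @tsv'

This only filters the symptom for ad-hoc inspection; the test is still right to fail, since --output=json promises a pure JSON stream on stdout.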

TestJSONOutput/pause/Command (0.08s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-097000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-097000 --output=json --user=testUser: exit status 83 (74.95ms)

-- stdout --
	{"specversion":"1.0","id":"95e24beb-9e7d-4f72-80fb-bfa55648e8f0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-097000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"c5cda9e1-782b-4e97-b90a-6444442705cf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-097000\""}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-097000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

TestJSONOutput/unpause/Command (0.04s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-097000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-097000 --output=json --user=testUser: exit status 83 (42.90325ms)

-- stdout --
	* The control-plane node json-output-097000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-097000"

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-097000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-097000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.04s)

TestMinikubeProfile (10.1s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-763000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-763000 --driver=qemu2 : exit status 80 (9.799698833s)

-- stdout --
	* [first-763000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19643
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-763000" primary control-plane node in "first-763000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-763000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-763000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-763000 --driver=qemu2 ": exit status 80
panic.go:629: *** TestMinikubeProfile FAILED at 2024-09-14 10:24:19.675711 -0700 PDT m=+2485.940740209
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-764000 -n second-764000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-764000 -n second-764000: exit status 85 (81.726541ms)

-- stdout --
	* Profile "second-764000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-764000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-764000" host is not running, skipping log retrieval (state="* Profile \"second-764000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-764000\"")
helpers_test.go:175: Cleaning up "second-764000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-764000
panic.go:629: *** TestMinikubeProfile FAILED at 2024-09-14 10:24:19.866451 -0700 PDT m=+2486.131487417
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-763000 -n first-763000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-763000 -n first-763000: exit status 7 (30.44225ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-763000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-763000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-763000
--- FAIL: TestMinikubeProfile (10.10s)
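
Note on the root cause: every qemu2 start in this run fails before a VM exists, because socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet. The refusal is reproducible outside the suite by dialing the socket directly; a hedged Go sketch, using only the path reported in the log above:

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			// With the daemon down this prints "connect: connection refused",
			// matching the GUEST_PROVISION failures in this report.
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}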

TestMountStart/serial/StartWithMountFirst (10.13s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-512000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-512000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (10.056915s)

-- stdout --
	* [mount-start-1-512000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19643
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-512000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-512000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-512000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-512000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-512000 -n mount-start-1-512000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-512000 -n mount-start-1-512000: exit status 7 (67.914792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-512000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.13s)

TestMultiNode/serial/FreshStart2Nodes (10.03s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-699000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-699000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.956232625s)

-- stdout --
	* [multinode-699000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19643
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-699000" primary control-plane node in "multinode-699000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-699000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0914 10:24:30.316488    3854 out.go:345] Setting OutFile to fd 1 ...
	I0914 10:24:30.316606    3854 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:24:30.316609    3854 out.go:358] Setting ErrFile to fd 2...
	I0914 10:24:30.316612    3854 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:24:30.316743    3854 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19643-1079/.minikube/bin
	I0914 10:24:30.317807    3854 out.go:352] Setting JSON to false
	I0914 10:24:30.333841    3854 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3233,"bootTime":1726331437,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0914 10:24:30.333910    3854 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0914 10:24:30.340840    3854 out.go:177] * [multinode-699000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0914 10:24:30.349719    3854 out.go:177]   - MINIKUBE_LOCATION=19643
	I0914 10:24:30.349771    3854 notify.go:220] Checking for updates...
	I0914 10:24:30.358829    3854 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	I0914 10:24:30.360381    3854 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 10:24:30.363815    3854 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 10:24:30.366770    3854 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	I0914 10:24:30.369807    3854 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 10:24:30.372877    3854 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 10:24:30.376777    3854 out.go:177] * Using the qemu2 driver based on user configuration
	I0914 10:24:30.383796    3854 start.go:297] selected driver: qemu2
	I0914 10:24:30.383804    3854 start.go:901] validating driver "qemu2" against <nil>
	I0914 10:24:30.383812    3854 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 10:24:30.386168    3854 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0914 10:24:30.389759    3854 out.go:177] * Automatically selected the socket_vmnet network
	I0914 10:24:30.392910    3854 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 10:24:30.392932    3854 cni.go:84] Creating CNI manager for ""
	I0914 10:24:30.392964    3854 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0914 10:24:30.392974    3854 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0914 10:24:30.393009    3854 start.go:340] cluster config:
	{Name:multinode-699000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-699000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 10:24:30.396737    3854 iso.go:125] acquiring lock: {Name:mk9ce8cb266895455f3a8fa26b67755853c5c1f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 10:24:30.404784    3854 out.go:177] * Starting "multinode-699000" primary control-plane node in "multinode-699000" cluster
	I0914 10:24:30.407770    3854 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0914 10:24:30.407785    3854 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0914 10:24:30.407800    3854 cache.go:56] Caching tarball of preloaded images
	I0914 10:24:30.407877    3854 preload.go:172] Found /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 10:24:30.407882    3854 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0914 10:24:30.408120    3854 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/multinode-699000/config.json ...
	I0914 10:24:30.408132    3854 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/multinode-699000/config.json: {Name:mk7830627c6e09175ca82a176cc11451320774d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 10:24:30.408563    3854 start.go:360] acquireMachinesLock for multinode-699000: {Name:mk3755b4300f547653a71a0686083053e04fcd63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 10:24:30.408599    3854 start.go:364] duration metric: took 29.375µs to acquireMachinesLock for "multinode-699000"
	I0914 10:24:30.408610    3854 start.go:93] Provisioning new machine with config: &{Name:multinode-699000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-699000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 10:24:30.408643    3854 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 10:24:30.413841    3854 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0914 10:24:30.432126    3854 start.go:159] libmachine.API.Create for "multinode-699000" (driver="qemu2")
	I0914 10:24:30.432157    3854 client.go:168] LocalClient.Create starting
	I0914 10:24:30.432230    3854 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/ca.pem
	I0914 10:24:30.432263    3854 main.go:141] libmachine: Decoding PEM data...
	I0914 10:24:30.432273    3854 main.go:141] libmachine: Parsing certificate...
	I0914 10:24:30.432315    3854 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/cert.pem
	I0914 10:24:30.432340    3854 main.go:141] libmachine: Decoding PEM data...
	I0914 10:24:30.432349    3854 main.go:141] libmachine: Parsing certificate...
	I0914 10:24:30.432827    3854 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19643-1079/.minikube/cache/iso/arm64/minikube-v1.34.0-1726281733-19643-arm64.iso...
	I0914 10:24:30.594485    3854 main.go:141] libmachine: Creating SSH key...
	I0914 10:24:30.673143    3854 main.go:141] libmachine: Creating Disk image...
	I0914 10:24:30.673148    3854 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 10:24:30.673312    3854 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/multinode-699000/disk.qcow2.raw /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/multinode-699000/disk.qcow2
	I0914 10:24:30.682644    3854 main.go:141] libmachine: STDOUT: 
	I0914 10:24:30.682668    3854 main.go:141] libmachine: STDERR: 
	I0914 10:24:30.682729    3854 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/multinode-699000/disk.qcow2 +20000M
	I0914 10:24:30.690606    3854 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 10:24:30.690686    3854 main.go:141] libmachine: STDERR: 
	I0914 10:24:30.690701    3854 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/multinode-699000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/multinode-699000/disk.qcow2
	I0914 10:24:30.690706    3854 main.go:141] libmachine: Starting QEMU VM...
	I0914 10:24:30.690717    3854 qemu.go:418] Using hvf for hardware acceleration
	I0914 10:24:30.690746    3854 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/multinode-699000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/multinode-699000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/multinode-699000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:11:ce:5d:17:f4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/multinode-699000/disk.qcow2
	I0914 10:24:30.692356    3854 main.go:141] libmachine: STDOUT: 
	I0914 10:24:30.692371    3854 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 10:24:30.692393    3854 client.go:171] duration metric: took 260.239667ms to LocalClient.Create
	I0914 10:24:32.694499    3854 start.go:128] duration metric: took 2.285920541s to createHost
	I0914 10:24:32.694557    3854 start.go:83] releasing machines lock for "multinode-699000", held for 2.286033542s
	W0914 10:24:32.694606    3854 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 10:24:32.713795    3854 out.go:177] * Deleting "multinode-699000" in qemu2 ...
	W0914 10:24:32.747550    3854 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 10:24:32.747574    3854 start.go:729] Will try again in 5 seconds ...
	I0914 10:24:37.749690    3854 start.go:360] acquireMachinesLock for multinode-699000: {Name:mk3755b4300f547653a71a0686083053e04fcd63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 10:24:37.750228    3854 start.go:364] duration metric: took 361.625µs to acquireMachinesLock for "multinode-699000"
	I0914 10:24:37.750354    3854 start.go:93] Provisioning new machine with config: &{Name:multinode-699000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-699000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 10:24:37.750647    3854 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 10:24:37.772366    3854 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0914 10:24:37.823887    3854 start.go:159] libmachine.API.Create for "multinode-699000" (driver="qemu2")
	I0914 10:24:37.823930    3854 client.go:168] LocalClient.Create starting
	I0914 10:24:37.824054    3854 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/ca.pem
	I0914 10:24:37.824117    3854 main.go:141] libmachine: Decoding PEM data...
	I0914 10:24:37.824140    3854 main.go:141] libmachine: Parsing certificate...
	I0914 10:24:37.824199    3854 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/cert.pem
	I0914 10:24:37.824251    3854 main.go:141] libmachine: Decoding PEM data...
	I0914 10:24:37.824265    3854 main.go:141] libmachine: Parsing certificate...
	I0914 10:24:37.824810    3854 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19643-1079/.minikube/cache/iso/arm64/minikube-v1.34.0-1726281733-19643-arm64.iso...
	I0914 10:24:38.056644    3854 main.go:141] libmachine: Creating SSH key...
	I0914 10:24:38.179792    3854 main.go:141] libmachine: Creating Disk image...
	I0914 10:24:38.179798    3854 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 10:24:38.179965    3854 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/multinode-699000/disk.qcow2.raw /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/multinode-699000/disk.qcow2
	I0914 10:24:38.189236    3854 main.go:141] libmachine: STDOUT: 
	I0914 10:24:38.189256    3854 main.go:141] libmachine: STDERR: 
	I0914 10:24:38.189307    3854 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/multinode-699000/disk.qcow2 +20000M
	I0914 10:24:38.197113    3854 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 10:24:38.197133    3854 main.go:141] libmachine: STDERR: 
	I0914 10:24:38.197144    3854 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/multinode-699000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/multinode-699000/disk.qcow2
	I0914 10:24:38.197157    3854 main.go:141] libmachine: Starting QEMU VM...
	I0914 10:24:38.197168    3854 qemu.go:418] Using hvf for hardware acceleration
	I0914 10:24:38.197196    3854 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/multinode-699000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/multinode-699000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/multinode-699000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:ae:91:4b:74:11 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/multinode-699000/disk.qcow2
	I0914 10:24:38.198855    3854 main.go:141] libmachine: STDOUT: 
	I0914 10:24:38.198876    3854 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 10:24:38.198888    3854 client.go:171] duration metric: took 374.964458ms to LocalClient.Create
	I0914 10:24:40.201004    3854 start.go:128] duration metric: took 2.450381667s to createHost
	I0914 10:24:40.201086    3854 start.go:83] releasing machines lock for "multinode-699000", held for 2.450922625s
	W0914 10:24:40.201396    3854 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-699000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-699000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 10:24:40.210545    3854 out.go:201] 
	W0914 10:24:40.217780    3854 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 10:24:40.217817    3854 out.go:270] * 
	* 
	W0914 10:24:40.219395    3854 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 10:24:40.230474    3854 out.go:201] 

** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-699000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-699000 -n multinode-699000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-699000 -n multinode-699000: exit status 7 (67.933541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-699000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (10.03s)
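
Note on the qemu command line in the log: "-netdev socket,id=net0,fd=3" only works because socket_vmnet_client connects to /var/run/socket_vmnet first and then starts qemu with that connection inherited as file descriptor 3; when the connect is refused, qemu never launches. A rough Go sketch of the fd-inheritance pattern (a simplification for illustration, not the real client's code):

	package main

	import (
		"log"
		"net"
		"os"
		"os/exec"
	)

	func main() {
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			log.Fatal(err) // the step that fails throughout this run
		}
		f, err := conn.(*net.UnixConn).File()
		if err != nil {
			log.Fatal(err)
		}
		// ExtraFiles[0] becomes fd 3 in the child, which is what
		// "-netdev socket,id=net0,fd=3" refers to.
		cmd := exec.Command("qemu-system-aarch64",
			"-netdev", "socket,id=net0,fd=3",
			"-device", "virtio-net-pci,netdev=net0")
		cmd.ExtraFiles = []*os.File{f}
		if err := cmd.Run(); err != nil {
			log.Fatal(err)
		}
	}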

TestMultiNode/serial/DeployApp2Nodes (99.07s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-699000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-699000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (128.173667ms)

** stderr ** 
	error: cluster "multinode-699000" does not exist

** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-699000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-699000 -- rollout status deployment/busybox: exit status 1 (58.111084ms)

** stderr ** 
	error: no server found for cluster "multinode-699000"

** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-699000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-699000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (57.634167ms)

** stderr ** 
	error: no server found for cluster "multinode-699000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-699000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-699000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.495084ms)

** stderr ** 
	error: no server found for cluster "multinode-699000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-699000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-699000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.073583ms)

** stderr ** 
	error: no server found for cluster "multinode-699000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-699000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-699000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.491ms)

** stderr ** 
	error: no server found for cluster "multinode-699000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-699000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-699000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.714791ms)

** stderr ** 
	error: no server found for cluster "multinode-699000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-699000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-699000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.566584ms)

** stderr ** 
	error: no server found for cluster "multinode-699000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-699000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-699000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.388417ms)

** stderr ** 
	error: no server found for cluster "multinode-699000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-699000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-699000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.3475ms)

** stderr ** 
	error: no server found for cluster "multinode-699000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-699000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-699000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.281375ms)

** stderr ** 
	error: no server found for cluster "multinode-699000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-699000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-699000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.690917ms)

** stderr ** 
	error: no server found for cluster "multinode-699000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-699000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-699000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (57.649417ms)

** stderr ** 
	error: no server found for cluster "multinode-699000"

** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-699000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-699000 -- exec  -- nslookup kubernetes.io: exit status 1 (57.565958ms)

** stderr ** 
	error: no server found for cluster "multinode-699000"

** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-699000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-699000 -- exec  -- nslookup kubernetes.default: exit status 1 (58.422625ms)

** stderr ** 
	error: no server found for cluster "multinode-699000"

** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-699000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-699000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (57.500833ms)

** stderr ** 
	error: no server found for cluster "multinode-699000"

** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-699000 -n multinode-699000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-699000 -n multinode-699000: exit status 7 (30.961708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-699000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (99.07s)
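
Note on the repeated commands above: the pod-IP check is a poll loop that treats each non-zero exit as possibly temporary and retries until a deadline, which is why the same "get pods" invocation appears roughly ten times before the test gives up at multinode_test.go:524. A generic Go sketch of that pattern (the deadline and interval are illustrative, not the suite's actual values):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func podIPs(profile string) (string, error) {
		out, err := exec.Command("out/minikube-darwin-arm64", "kubectl", "-p", profile,
			"--", "get", "pods", "-o", "jsonpath={.items[*].status.podIP}").Output()
		return string(out), err
	}

	func main() {
		deadline := time.Now().Add(90 * time.Second) // illustrative deadline
		for time.Now().Before(deadline) {
			if out, err := podIPs("multinode-699000"); err == nil {
				fmt.Println("pod IPs:", out)
				return
			}
			time.Sleep(10 * time.Second) // failure may be temporary; retry
		}
		fmt.Println("failed to resolve pod IPs before the deadline")
	}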

TestMultiNode/serial/PingHostFrom2Pods (0.09s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-699000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-699000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (57.123083ms)

** stderr ** 
	error: no server found for cluster "multinode-699000"

** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-699000 -n multinode-699000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-699000 -n multinode-699000: exit status 7 (29.74375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-699000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

TestMultiNode/serial/AddNode (0.07s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-699000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-699000 -v 3 --alsologtostderr: exit status 83 (41.470875ms)

-- stdout --
	* The control-plane node multinode-699000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-699000"

-- /stdout --
** stderr ** 
	I0914 10:26:19.495890    3986 out.go:345] Setting OutFile to fd 1 ...
	I0914 10:26:19.496064    3986 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:26:19.496067    3986 out.go:358] Setting ErrFile to fd 2...
	I0914 10:26:19.496070    3986 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:26:19.496210    3986 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19643-1079/.minikube/bin
	I0914 10:26:19.496437    3986 mustload.go:65] Loading cluster: multinode-699000
	I0914 10:26:19.496628    3986 config.go:182] Loaded profile config "multinode-699000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 10:26:19.501953    3986 out.go:177] * The control-plane node multinode-699000 host is not running: state=Stopped
	I0914 10:26:19.505979    3986 out.go:177]   To start a cluster, run: "minikube start -p multinode-699000"

** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-699000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-699000 -n multinode-699000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-699000 -n multinode-699000: exit status 7 (30.841459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-699000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-699000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-699000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (27.685458ms)

** stderr ** 
	Error in configuration: context was not found for specified context: multinode-699000

** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-699000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-699000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-699000 -n multinode-699000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-699000 -n multinode-699000: exit status 7 (30.771083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-699000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)
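
Note on the second error above: after kubectl exits non-zero, the test still tries to decode its stdout, which is empty, and "unexpected end of JSON input" is exactly what encoding/json returns for zero-length input:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		var labels map[string]string
		err := json.Unmarshal([]byte(""), &labels) // kubectl printed nothing
		fmt.Println(err)                           // unexpected end of JSON input
	}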

TestMultiNode/serial/ProfileList (0.08s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-699000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-699000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-699000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"multinode-699000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-699000 -n multinode-699000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-699000 -n multinode-699000: exit status 7 (30.771875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-699000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.08s)
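
Note on this failure: the assertion at multinode_test.go:166 decodes the `profile list --output json` dump shown above and counts the entries under Config.Nodes for the profile; only the control-plane entry exists, so the expected three nodes come back as one. A minimal sketch of that check, using simplified, hypothetical types (the real ones live in minikube's own packages):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // profileList mirrors only the fields the assertion needs; the full
    // schema is visible in the JSON dump above.
    type profileList struct {
        Valid []struct {
            Name   string `json:"Name"`
            Config struct {
                Nodes []json.RawMessage `json:"Nodes"`
            } `json:"Config"`
        } `json:"valid"`
    }

    // nodeCount returns how many entries the named profile's Nodes list holds.
    func nodeCount(raw []byte, profile string) (int, error) {
        var pl profileList
        if err := json.Unmarshal(raw, &pl); err != nil {
            return 0, err
        }
        for _, p := range pl.Valid {
            if p.Name == profile {
                return len(p.Config.Nodes), nil
            }
        }
        return 0, fmt.Errorf("profile %q not found", profile)
    }

    func main() {
        // Trimmed-down version of the `profile list --output json` dump above.
        raw := []byte(`{"invalid":[],"valid":[{"Name":"multinode-699000","Config":{"Nodes":[{"Name":""}]}}]}`)
        n, err := nodeCount(raw, "multinode-699000")
        fmt.Println(n, err) // prints "1 <nil>" where the test wants 3
    }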

TestMultiNode/serial/CopyFile (0.06s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-699000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-699000 status --output json --alsologtostderr: exit status 7 (30.5335ms)

-- stdout --
	{"Name":"multinode-699000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0914 10:26:19.705525    3998 out.go:345] Setting OutFile to fd 1 ...
	I0914 10:26:19.705682    3998 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:26:19.705685    3998 out.go:358] Setting ErrFile to fd 2...
	I0914 10:26:19.705687    3998 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:26:19.705818    3998 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19643-1079/.minikube/bin
	I0914 10:26:19.705939    3998 out.go:352] Setting JSON to true
	I0914 10:26:19.705947    3998 mustload.go:65] Loading cluster: multinode-699000
	I0914 10:26:19.705998    3998 notify.go:220] Checking for updates...
	I0914 10:26:19.706148    3998 config.go:182] Loaded profile config "multinode-699000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 10:26:19.706154    3998 status.go:255] checking status of multinode-699000 ...
	I0914 10:26:19.706399    3998 status.go:330] multinode-699000 host status = "Stopped" (err=<nil>)
	I0914 10:26:19.706403    3998 status.go:343] host is not running, skipping remaining checks
	I0914 10:26:19.706405    3998 status.go:257] multinode-699000 status: &{Name:multinode-699000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-699000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-699000 -n multinode-699000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-699000 -n multinode-699000: exit status 7 (30.31425ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-699000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
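
Note on this failure: the decode error is a shape mismatch rather than corrupt output. With a single node alive, `minikube status --output json` prints one bare JSON object (see the stdout above), while the test unmarshals into the slice type []cmd.Status. A hedged sketch, with a simplified Status type and an invented helper name, of accepting both shapes:

    package main

    import (
        "bytes"
        "encoding/json"
        "fmt"
    )

    // Status carries only the fields needed for the illustration.
    type Status struct {
        Name string `json:"Name"`
        Host string `json:"Host"`
    }

    // decodeStatuses accepts either shape: a bare object (one node) or an
    // array of objects (several nodes).
    func decodeStatuses(raw []byte) ([]Status, error) {
        trimmed := bytes.TrimSpace(raw)
        if len(trimmed) > 0 && trimmed[0] == '{' {
            var s Status
            if err := json.Unmarshal(trimmed, &s); err != nil {
                return nil, err
            }
            return []Status{s}, nil
        }
        var ss []Status
        return ss, json.Unmarshal(trimmed, &ss)
    }

    func main() {
        // The exact stdout shape from the failing run: one object, no array.
        raw := []byte(`{"Name":"multinode-699000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)
        ss, err := decodeStatuses(raw)
        fmt.Println(ss, err)
    }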

TestMultiNode/serial/StopNode (0.14s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-699000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-699000 node stop m03: exit status 85 (48.422167ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-699000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-699000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-699000 status: exit status 7 (30.411333ms)

-- stdout --
	multinode-699000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-699000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-699000 status --alsologtostderr: exit status 7 (30.197792ms)

-- stdout --
	multinode-699000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0914 10:26:19.845558    4006 out.go:345] Setting OutFile to fd 1 ...
	I0914 10:26:19.845710    4006 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:26:19.845713    4006 out.go:358] Setting ErrFile to fd 2...
	I0914 10:26:19.845715    4006 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:26:19.845859    4006 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19643-1079/.minikube/bin
	I0914 10:26:19.845984    4006 out.go:352] Setting JSON to false
	I0914 10:26:19.845993    4006 mustload.go:65] Loading cluster: multinode-699000
	I0914 10:26:19.846051    4006 notify.go:220] Checking for updates...
	I0914 10:26:19.846194    4006 config.go:182] Loaded profile config "multinode-699000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 10:26:19.846200    4006 status.go:255] checking status of multinode-699000 ...
	I0914 10:26:19.846433    4006 status.go:330] multinode-699000 host status = "Stopped" (err=<nil>)
	I0914 10:26:19.846438    4006 status.go:343] host is not running, skipping remaining checks
	I0914 10:26:19.846443    4006 status.go:257] multinode-699000 status: &{Name:multinode-699000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-699000 status --alsologtostderr": multinode-699000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-699000 -n multinode-699000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-699000 -n multinode-699000: exit status 7 (30.372542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-699000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.14s)
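
Note on this failure: exit status 85 maps to GUEST_NODE_RETRIEVE. The profile only ever got its control-plane node (see the profile JSON earlier), so looking up m03 fails before any stop is attempted. An illustrative sketch of that lookup (names and logic invented, not minikube's actual code):

    package main

    import "fmt"

    // findNode mimics the lookup that produces GUEST_NODE_RETRIEVE: if the
    // requested name is absent from the profile's node list, the command
    // can only report the error seen in the stderr above.
    func findNode(nodes []string, name string) error {
        for _, n := range nodes {
            if n == name {
                return nil
            }
        }
        return fmt.Errorf("retrieving node: Could not find node %s", name)
    }

    func main() {
        nodes := []string{"multinode-699000"} // the only node the profile has
        fmt.Println(findNode(nodes, "m03"))
    }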

TestMultiNode/serial/StartAfterStop (38.62s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-699000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-699000 node start m03 -v=7 --alsologtostderr: exit status 85 (47.493042ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0914 10:26:19.906761    4010 out.go:345] Setting OutFile to fd 1 ...
	I0914 10:26:19.907023    4010 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:26:19.907026    4010 out.go:358] Setting ErrFile to fd 2...
	I0914 10:26:19.907028    4010 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:26:19.907167    4010 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19643-1079/.minikube/bin
	I0914 10:26:19.907399    4010 mustload.go:65] Loading cluster: multinode-699000
	I0914 10:26:19.907586    4010 config.go:182] Loaded profile config "multinode-699000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 10:26:19.912478    4010 out.go:201] 
	W0914 10:26:19.915576    4010 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0914 10:26:19.915581    4010 out.go:270] * 
	* 
	W0914 10:26:19.917339    4010 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 10:26:19.920478    4010 out.go:201] 

** /stderr **
multinode_test.go:284: I0914 10:26:19.906761    4010 out.go:345] Setting OutFile to fd 1 ...
I0914 10:26:19.907023    4010 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0914 10:26:19.907026    4010 out.go:358] Setting ErrFile to fd 2...
I0914 10:26:19.907028    4010 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0914 10:26:19.907167    4010 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19643-1079/.minikube/bin
I0914 10:26:19.907399    4010 mustload.go:65] Loading cluster: multinode-699000
I0914 10:26:19.907586    4010 config.go:182] Loaded profile config "multinode-699000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0914 10:26:19.912478    4010 out.go:201] 
W0914 10:26:19.915576    4010 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0914 10:26:19.915581    4010 out.go:270] * 
* 
W0914 10:26:19.917339    4010 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0914 10:26:19.920478    4010 out.go:201] 

multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-699000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-699000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-699000 status -v=7 --alsologtostderr: exit status 7 (31.360875ms)

-- stdout --
	multinode-699000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0914 10:26:19.954460    4012 out.go:345] Setting OutFile to fd 1 ...
	I0914 10:26:19.954609    4012 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:26:19.954612    4012 out.go:358] Setting ErrFile to fd 2...
	I0914 10:26:19.954614    4012 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:26:19.954732    4012 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19643-1079/.minikube/bin
	I0914 10:26:19.955083    4012 out.go:352] Setting JSON to false
	I0914 10:26:19.955097    4012 mustload.go:65] Loading cluster: multinode-699000
	I0914 10:26:19.955394    4012 notify.go:220] Checking for updates...
	I0914 10:26:19.955464    4012 config.go:182] Loaded profile config "multinode-699000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 10:26:19.955477    4012 status.go:255] checking status of multinode-699000 ...
	I0914 10:26:19.955951    4012 status.go:330] multinode-699000 host status = "Stopped" (err=<nil>)
	I0914 10:26:19.955956    4012 status.go:343] host is not running, skipping remaining checks
	I0914 10:26:19.955958    4012 status.go:257] multinode-699000 status: &{Name:multinode-699000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-699000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-699000 status -v=7 --alsologtostderr: exit status 7 (73.737292ms)

-- stdout --
	multinode-699000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0914 10:26:20.866460    4014 out.go:345] Setting OutFile to fd 1 ...
	I0914 10:26:20.866685    4014 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:26:20.866690    4014 out.go:358] Setting ErrFile to fd 2...
	I0914 10:26:20.866693    4014 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:26:20.866891    4014 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19643-1079/.minikube/bin
	I0914 10:26:20.867043    4014 out.go:352] Setting JSON to false
	I0914 10:26:20.867056    4014 mustload.go:65] Loading cluster: multinode-699000
	I0914 10:26:20.867107    4014 notify.go:220] Checking for updates...
	I0914 10:26:20.867348    4014 config.go:182] Loaded profile config "multinode-699000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 10:26:20.867356    4014 status.go:255] checking status of multinode-699000 ...
	I0914 10:26:20.867675    4014 status.go:330] multinode-699000 host status = "Stopped" (err=<nil>)
	I0914 10:26:20.867680    4014 status.go:343] host is not running, skipping remaining checks
	I0914 10:26:20.867683    4014 status.go:257] multinode-699000 status: &{Name:multinode-699000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-699000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-699000 status -v=7 --alsologtostderr: exit status 7 (73.9795ms)

-- stdout --
	multinode-699000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0914 10:26:22.168678    4018 out.go:345] Setting OutFile to fd 1 ...
	I0914 10:26:22.168869    4018 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:26:22.168873    4018 out.go:358] Setting ErrFile to fd 2...
	I0914 10:26:22.168876    4018 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:26:22.169072    4018 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19643-1079/.minikube/bin
	I0914 10:26:22.169245    4018 out.go:352] Setting JSON to false
	I0914 10:26:22.169256    4018 mustload.go:65] Loading cluster: multinode-699000
	I0914 10:26:22.169302    4018 notify.go:220] Checking for updates...
	I0914 10:26:22.169522    4018 config.go:182] Loaded profile config "multinode-699000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 10:26:22.169531    4018 status.go:255] checking status of multinode-699000 ...
	I0914 10:26:22.169835    4018 status.go:330] multinode-699000 host status = "Stopped" (err=<nil>)
	I0914 10:26:22.169840    4018 status.go:343] host is not running, skipping remaining checks
	I0914 10:26:22.169843    4018 status.go:257] multinode-699000 status: &{Name:multinode-699000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-699000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-699000 status -v=7 --alsologtostderr: exit status 7 (73.307834ms)

-- stdout --
	multinode-699000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0914 10:26:25.011460    4022 out.go:345] Setting OutFile to fd 1 ...
	I0914 10:26:25.011670    4022 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:26:25.011674    4022 out.go:358] Setting ErrFile to fd 2...
	I0914 10:26:25.011678    4022 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:26:25.011868    4022 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19643-1079/.minikube/bin
	I0914 10:26:25.012019    4022 out.go:352] Setting JSON to false
	I0914 10:26:25.012030    4022 mustload.go:65] Loading cluster: multinode-699000
	I0914 10:26:25.012083    4022 notify.go:220] Checking for updates...
	I0914 10:26:25.012304    4022 config.go:182] Loaded profile config "multinode-699000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 10:26:25.012311    4022 status.go:255] checking status of multinode-699000 ...
	I0914 10:26:25.012618    4022 status.go:330] multinode-699000 host status = "Stopped" (err=<nil>)
	I0914 10:26:25.012623    4022 status.go:343] host is not running, skipping remaining checks
	I0914 10:26:25.012626    4022 status.go:257] multinode-699000 status: &{Name:multinode-699000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-699000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-699000 status -v=7 --alsologtostderr: exit status 7 (75.555709ms)

-- stdout --
	multinode-699000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0914 10:26:29.884283    4031 out.go:345] Setting OutFile to fd 1 ...
	I0914 10:26:29.884484    4031 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:26:29.884489    4031 out.go:358] Setting ErrFile to fd 2...
	I0914 10:26:29.884493    4031 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:26:29.884680    4031 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19643-1079/.minikube/bin
	I0914 10:26:29.884862    4031 out.go:352] Setting JSON to false
	I0914 10:26:29.884875    4031 mustload.go:65] Loading cluster: multinode-699000
	I0914 10:26:29.884922    4031 notify.go:220] Checking for updates...
	I0914 10:26:29.885159    4031 config.go:182] Loaded profile config "multinode-699000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 10:26:29.885169    4031 status.go:255] checking status of multinode-699000 ...
	I0914 10:26:29.885480    4031 status.go:330] multinode-699000 host status = "Stopped" (err=<nil>)
	I0914 10:26:29.885485    4031 status.go:343] host is not running, skipping remaining checks
	I0914 10:26:29.885488    4031 status.go:257] multinode-699000 status: &{Name:multinode-699000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-699000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-699000 status -v=7 --alsologtostderr: exit status 7 (72.079792ms)

-- stdout --
	multinode-699000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0914 10:26:32.567827    4033 out.go:345] Setting OutFile to fd 1 ...
	I0914 10:26:32.568050    4033 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:26:32.568054    4033 out.go:358] Setting ErrFile to fd 2...
	I0914 10:26:32.568058    4033 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:26:32.568225    4033 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19643-1079/.minikube/bin
	I0914 10:26:32.568406    4033 out.go:352] Setting JSON to false
	I0914 10:26:32.568417    4033 mustload.go:65] Loading cluster: multinode-699000
	I0914 10:26:32.568458    4033 notify.go:220] Checking for updates...
	I0914 10:26:32.568691    4033 config.go:182] Loaded profile config "multinode-699000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 10:26:32.568699    4033 status.go:255] checking status of multinode-699000 ...
	I0914 10:26:32.569020    4033 status.go:330] multinode-699000 host status = "Stopped" (err=<nil>)
	I0914 10:26:32.569025    4033 status.go:343] host is not running, skipping remaining checks
	I0914 10:26:32.569028    4033 status.go:257] multinode-699000 status: &{Name:multinode-699000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
E0914 10:26:36.674495    1603 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/addons-528000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-699000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-699000 status -v=7 --alsologtostderr: exit status 7 (74.174167ms)

-- stdout --
	multinode-699000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0914 10:26:43.267470    4044 out.go:345] Setting OutFile to fd 1 ...
	I0914 10:26:43.267708    4044 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:26:43.267713    4044 out.go:358] Setting ErrFile to fd 2...
	I0914 10:26:43.267716    4044 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:26:43.267893    4044 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19643-1079/.minikube/bin
	I0914 10:26:43.268053    4044 out.go:352] Setting JSON to false
	I0914 10:26:43.268063    4044 mustload.go:65] Loading cluster: multinode-699000
	I0914 10:26:43.268099    4044 notify.go:220] Checking for updates...
	I0914 10:26:43.268347    4044 config.go:182] Loaded profile config "multinode-699000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 10:26:43.268355    4044 status.go:255] checking status of multinode-699000 ...
	I0914 10:26:43.268678    4044 status.go:330] multinode-699000 host status = "Stopped" (err=<nil>)
	I0914 10:26:43.268683    4044 status.go:343] host is not running, skipping remaining checks
	I0914 10:26:43.268686    4044 status.go:257] multinode-699000 status: &{Name:multinode-699000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
E0914 10:26:47.860660    1603 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/functional-855000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-699000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-699000 status -v=7 --alsologtostderr: exit status 7 (71.03575ms)

-- stdout --
	multinode-699000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0914 10:26:58.445385    4057 out.go:345] Setting OutFile to fd 1 ...
	I0914 10:26:58.445545    4057 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:26:58.445550    4057 out.go:358] Setting ErrFile to fd 2...
	I0914 10:26:58.445553    4057 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:26:58.445715    4057 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19643-1079/.minikube/bin
	I0914 10:26:58.445855    4057 out.go:352] Setting JSON to false
	I0914 10:26:58.445867    4057 mustload.go:65] Loading cluster: multinode-699000
	I0914 10:26:58.445909    4057 notify.go:220] Checking for updates...
	I0914 10:26:58.446140    4057 config.go:182] Loaded profile config "multinode-699000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 10:26:58.446148    4057 status.go:255] checking status of multinode-699000 ...
	I0914 10:26:58.446463    4057 status.go:330] multinode-699000 host status = "Stopped" (err=<nil>)
	I0914 10:26:58.446468    4057 status.go:343] host is not running, skipping remaining checks
	I0914 10:26:58.446471    4057 status.go:257] multinode-699000 status: &{Name:multinode-699000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-699000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-699000 -n multinode-699000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-699000 -n multinode-699000: exit status 7 (33.521042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-699000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (38.62s)
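
Note on this failure: the timestamps above show the test re-running `status` with growing gaps (10:26:19, :20, :22, :25, :29, :32, :43, :58) before giving up. A sketch of that poll-with-backoff pattern, with the binary path taken from the log and the attempt count and backoff factor invented:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitRunning polls `minikube status` with a growing delay until the
    // command stops exiting non-zero, mirroring the cadence in the log.
    func waitRunning(profile string, attempts int) error {
        delay := time.Second
        for i := 0; i < attempts; i++ {
            out, err := exec.Command("out/minikube-darwin-arm64", "-p", profile,
                "status", "-v=7", "--alsologtostderr").CombinedOutput()
            if err == nil {
                return nil
            }
            fmt.Printf("attempt %d: %v\n%s", i+1, err, out)
            time.Sleep(delay)
            delay += delay / 2 // grow the gap between polls
        }
        return fmt.Errorf("%s never reported a running host", profile)
    }

    func main() {
        fmt.Println(waitRunning("multinode-699000", 8))
    }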

TestMultiNode/serial/RestartKeepsNodes (7.35s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-699000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-699000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-699000: (2.001947125s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-699000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-699000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.211411333s)

-- stdout --
	* [multinode-699000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19643
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-699000" primary control-plane node in "multinode-699000" cluster
	* Restarting existing qemu2 VM for "multinode-699000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-699000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0914 10:27:00.572394    4075 out.go:345] Setting OutFile to fd 1 ...
	I0914 10:27:00.572584    4075 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:27:00.572588    4075 out.go:358] Setting ErrFile to fd 2...
	I0914 10:27:00.572591    4075 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:27:00.572758    4075 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19643-1079/.minikube/bin
	I0914 10:27:00.574054    4075 out.go:352] Setting JSON to false
	I0914 10:27:00.592852    4075 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3383,"bootTime":1726331437,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0914 10:27:00.592946    4075 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0914 10:27:00.595335    4075 out.go:177] * [multinode-699000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0914 10:27:00.602086    4075 out.go:177]   - MINIKUBE_LOCATION=19643
	I0914 10:27:00.602130    4075 notify.go:220] Checking for updates...
	I0914 10:27:00.607943    4075 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	I0914 10:27:00.610984    4075 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 10:27:00.612326    4075 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 10:27:00.614950    4075 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	I0914 10:27:00.617964    4075 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 10:27:00.621361    4075 config.go:182] Loaded profile config "multinode-699000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 10:27:00.621419    4075 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 10:27:00.625935    4075 out.go:177] * Using the qemu2 driver based on existing profile
	I0914 10:27:00.632980    4075 start.go:297] selected driver: qemu2
	I0914 10:27:00.632987    4075 start.go:901] validating driver "qemu2" against &{Name:multinode-699000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-699000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 10:27:00.633056    4075 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 10:27:00.635344    4075 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 10:27:00.635368    4075 cni.go:84] Creating CNI manager for ""
	I0914 10:27:00.635392    4075 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0914 10:27:00.635437    4075 start.go:340] cluster config:
	{Name:multinode-699000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-699000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 10:27:00.639006    4075 iso.go:125] acquiring lock: {Name:mk9ce8cb266895455f3a8fa26b67755853c5c1f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 10:27:00.646979    4075 out.go:177] * Starting "multinode-699000" primary control-plane node in "multinode-699000" cluster
	I0914 10:27:00.650957    4075 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0914 10:27:00.650973    4075 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0914 10:27:00.650987    4075 cache.go:56] Caching tarball of preloaded images
	I0914 10:27:00.651045    4075 preload.go:172] Found /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 10:27:00.651050    4075 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0914 10:27:00.651112    4075 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/multinode-699000/config.json ...
	I0914 10:27:00.651598    4075 start.go:360] acquireMachinesLock for multinode-699000: {Name:mk3755b4300f547653a71a0686083053e04fcd63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 10:27:00.651632    4075 start.go:364] duration metric: took 28.5µs to acquireMachinesLock for "multinode-699000"
	I0914 10:27:00.651641    4075 start.go:96] Skipping create...Using existing machine configuration
	I0914 10:27:00.651645    4075 fix.go:54] fixHost starting: 
	I0914 10:27:00.651765    4075 fix.go:112] recreateIfNeeded on multinode-699000: state=Stopped err=<nil>
	W0914 10:27:00.651773    4075 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 10:27:00.658934    4075 out.go:177] * Restarting existing qemu2 VM for "multinode-699000" ...
	I0914 10:27:00.663127    4075 qemu.go:418] Using hvf for hardware acceleration
	I0914 10:27:00.663173    4075 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/multinode-699000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/multinode-699000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/multinode-699000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:ae:91:4b:74:11 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/multinode-699000/disk.qcow2
	I0914 10:27:00.665224    4075 main.go:141] libmachine: STDOUT: 
	I0914 10:27:00.665240    4075 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 10:27:00.665271    4075 fix.go:56] duration metric: took 13.625833ms for fixHost
	I0914 10:27:00.665277    4075 start.go:83] releasing machines lock for "multinode-699000", held for 13.641875ms
	W0914 10:27:00.665282    4075 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 10:27:00.665316    4075 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 10:27:00.665321    4075 start.go:729] Will try again in 5 seconds ...
	I0914 10:27:05.666984    4075 start.go:360] acquireMachinesLock for multinode-699000: {Name:mk3755b4300f547653a71a0686083053e04fcd63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 10:27:05.667385    4075 start.go:364] duration metric: took 301.958µs to acquireMachinesLock for "multinode-699000"
	I0914 10:27:05.667519    4075 start.go:96] Skipping create...Using existing machine configuration
	I0914 10:27:05.667540    4075 fix.go:54] fixHost starting: 
	I0914 10:27:05.668295    4075 fix.go:112] recreateIfNeeded on multinode-699000: state=Stopped err=<nil>
	W0914 10:27:05.668324    4075 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 10:27:05.676745    4075 out.go:177] * Restarting existing qemu2 VM for "multinode-699000" ...
	I0914 10:27:05.679816    4075 qemu.go:418] Using hvf for hardware acceleration
	I0914 10:27:05.680043    4075 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/multinode-699000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/multinode-699000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/multinode-699000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:ae:91:4b:74:11 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/multinode-699000/disk.qcow2
	I0914 10:27:05.689289    4075 main.go:141] libmachine: STDOUT: 
	I0914 10:27:05.689357    4075 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 10:27:05.689458    4075 fix.go:56] duration metric: took 21.921ms for fixHost
	I0914 10:27:05.689486    4075 start.go:83] releasing machines lock for "multinode-699000", held for 22.078959ms
	W0914 10:27:05.689696    4075 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-699000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-699000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 10:27:05.696791    4075 out.go:201] 
	W0914 10:27:05.700703    4075 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 10:27:05.700750    4075 out.go:270] * 
	* 
	W0914 10:27:05.703689    4075 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 10:27:05.711705    4075 out.go:201] 

** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-699000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-699000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-699000 -n multinode-699000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-699000 -n multinode-699000: exit status 7 (32.646583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-699000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (7.35s)
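
Note on this failure: the restart never reaches the guest. qemu is launched through /opt/socket_vmnet/bin/socket_vmnet_client, and every attempt dies with `Failed to connect to "/var/run/socket_vmnet": Connection refused`, which points at the socket_vmnet daemon being down on this agent rather than at the VM image. A small probe sketch (socket path from the log, logic illustrative) that reproduces the failing connection:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Probe the daemon's unix socket the way the qemu launch implicitly does.
        conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
        if err != nil {
            // Matches the failure above: nothing is listening on the socket.
            fmt.Println("socket_vmnet unreachable:", err)
            return
        }
        conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }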

TestMultiNode/serial/DeleteNode (0.1s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-699000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-699000 node delete m03: exit status 83 (40.786292ms)

-- stdout --
	* The control-plane node multinode-699000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-699000"

-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-699000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-699000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-699000 status --alsologtostderr: exit status 7 (29.744125ms)

-- stdout --
	multinode-699000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0914 10:27:05.896231    4094 out.go:345] Setting OutFile to fd 1 ...
	I0914 10:27:05.896390    4094 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:27:05.896393    4094 out.go:358] Setting ErrFile to fd 2...
	I0914 10:27:05.896395    4094 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:27:05.896524    4094 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19643-1079/.minikube/bin
	I0914 10:27:05.896644    4094 out.go:352] Setting JSON to false
	I0914 10:27:05.896653    4094 mustload.go:65] Loading cluster: multinode-699000
	I0914 10:27:05.896713    4094 notify.go:220] Checking for updates...
	I0914 10:27:05.896903    4094 config.go:182] Loaded profile config "multinode-699000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 10:27:05.896909    4094 status.go:255] checking status of multinode-699000 ...
	I0914 10:27:05.897155    4094 status.go:330] multinode-699000 host status = "Stopped" (err=<nil>)
	I0914 10:27:05.897158    4094 status.go:343] host is not running, skipping remaining checks
	I0914 10:27:05.897160    4094 status.go:257] multinode-699000 status: &{Name:multinode-699000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-699000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-699000 -n multinode-699000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-699000 -n multinode-699000: exit status 7 (30.448291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-699000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)
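The `node delete` above exits with status 83 because, as its stdout says, the control-plane host is stopped. A minimal pre-flight gate, mirroring the `status --format={{.Host}}` check the test helpers already run in the post-mortem (a sketch; binary path and profile name are taken from this log):

	# Only attempt node operations when the control-plane host is Running.
	HOST_STATE="$(out/minikube-darwin-arm64 status --format='{{.Host}}' -p multinode-699000 -n multinode-699000)"
	if [ "$HOST_STATE" != "Running" ]; then
	  echo "host is ${HOST_STATE}; start it first: minikube start -p multinode-699000" >&2
	  exit 1
	fi
	out/minikube-darwin-arm64 -p multinode-699000 node delete m03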

TestMultiNode/serial/StopMultiNode (3.53s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-699000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-699000 stop: (3.399618833s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-699000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-699000 status: exit status 7 (65.205083ms)

-- stdout --
	multinode-699000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-699000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-699000 status --alsologtostderr: exit status 7 (33.121791ms)

-- stdout --
	multinode-699000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0914 10:27:09.424949    4120 out.go:345] Setting OutFile to fd 1 ...
	I0914 10:27:09.425082    4120 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:27:09.425086    4120 out.go:358] Setting ErrFile to fd 2...
	I0914 10:27:09.425089    4120 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:27:09.425247    4120 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19643-1079/.minikube/bin
	I0914 10:27:09.425360    4120 out.go:352] Setting JSON to false
	I0914 10:27:09.425369    4120 mustload.go:65] Loading cluster: multinode-699000
	I0914 10:27:09.425428    4120 notify.go:220] Checking for updates...
	I0914 10:27:09.425577    4120 config.go:182] Loaded profile config "multinode-699000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 10:27:09.425584    4120 status.go:255] checking status of multinode-699000 ...
	I0914 10:27:09.425846    4120 status.go:330] multinode-699000 host status = "Stopped" (err=<nil>)
	I0914 10:27:09.425849    4120 status.go:343] host is not running, skipping remaining checks
	I0914 10:27:09.425851    4120 status.go:257] multinode-699000 status: &{Name:multinode-699000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-699000 status --alsologtostderr": multinode-699000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-699000 status --alsologtostderr": multinode-699000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-699000 -n multinode-699000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-699000 -n multinode-699000: exit status 7 (30.882958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-699000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (3.53s)
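The two "incorrect number" assertions above count `host: Stopped` and `kubelet: Stopped` lines in the status output; because the second node was never added earlier in this run, only one of each is present. A rough shell equivalent of that count (a sketch; the expected value of two reflects the two-node cluster this test presumes):

	# Replicate the test's count of stopped hosts and kubelets.
	STATUS="$(out/minikube-darwin-arm64 -p multinode-699000 status || true)"  # status exits non-zero when stopped
	printf '%s\n' "$STATUS" | grep -c 'host: Stopped'     # test presumably expects 2
	printf '%s\n' "$STATUS" | grep -c 'kubelet: Stopped'  # test presumably expects 2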

TestMultiNode/serial/RestartMultiNode (5.26s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-699000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-699000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.184842417s)

-- stdout --
	* [multinode-699000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19643
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-699000" primary control-plane node in "multinode-699000" cluster
	* Restarting existing qemu2 VM for "multinode-699000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-699000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0914 10:27:09.485820    4124 out.go:345] Setting OutFile to fd 1 ...
	I0914 10:27:09.485954    4124 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:27:09.485957    4124 out.go:358] Setting ErrFile to fd 2...
	I0914 10:27:09.485960    4124 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:27:09.486113    4124 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19643-1079/.minikube/bin
	I0914 10:27:09.487105    4124 out.go:352] Setting JSON to false
	I0914 10:27:09.503301    4124 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3392,"bootTime":1726331437,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0914 10:27:09.503365    4124 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0914 10:27:09.508440    4124 out.go:177] * [multinode-699000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0914 10:27:09.516342    4124 out.go:177]   - MINIKUBE_LOCATION=19643
	I0914 10:27:09.516409    4124 notify.go:220] Checking for updates...
	I0914 10:27:09.523354    4124 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	I0914 10:27:09.526343    4124 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 10:27:09.529289    4124 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 10:27:09.532341    4124 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	I0914 10:27:09.535307    4124 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 10:27:09.538638    4124 config.go:182] Loaded profile config "multinode-699000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 10:27:09.538902    4124 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 10:27:09.543308    4124 out.go:177] * Using the qemu2 driver based on existing profile
	I0914 10:27:09.550317    4124 start.go:297] selected driver: qemu2
	I0914 10:27:09.550323    4124 start.go:901] validating driver "qemu2" against &{Name:multinode-699000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-699000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 10:27:09.550371    4124 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 10:27:09.552611    4124 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 10:27:09.552632    4124 cni.go:84] Creating CNI manager for ""
	I0914 10:27:09.552657    4124 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0914 10:27:09.552708    4124 start.go:340] cluster config:
	{Name:multinode-699000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-699000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 10:27:09.556256    4124 iso.go:125] acquiring lock: {Name:mk9ce8cb266895455f3a8fa26b67755853c5c1f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 10:27:09.565342    4124 out.go:177] * Starting "multinode-699000" primary control-plane node in "multinode-699000" cluster
	I0914 10:27:09.569313    4124 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0914 10:27:09.569327    4124 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0914 10:27:09.569336    4124 cache.go:56] Caching tarball of preloaded images
	I0914 10:27:09.569385    4124 preload.go:172] Found /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 10:27:09.569391    4124 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0914 10:27:09.569434    4124 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/multinode-699000/config.json ...
	I0914 10:27:09.570047    4124 start.go:360] acquireMachinesLock for multinode-699000: {Name:mk3755b4300f547653a71a0686083053e04fcd63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 10:27:09.570078    4124 start.go:364] duration metric: took 24.125µs to acquireMachinesLock for "multinode-699000"
	I0914 10:27:09.570087    4124 start.go:96] Skipping create...Using existing machine configuration
	I0914 10:27:09.570094    4124 fix.go:54] fixHost starting: 
	I0914 10:27:09.570220    4124 fix.go:112] recreateIfNeeded on multinode-699000: state=Stopped err=<nil>
	W0914 10:27:09.570230    4124 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 10:27:09.578320    4124 out.go:177] * Restarting existing qemu2 VM for "multinode-699000" ...
	I0914 10:27:09.582208    4124 qemu.go:418] Using hvf for hardware acceleration
	I0914 10:27:09.582245    4124 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/multinode-699000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/multinode-699000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/multinode-699000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:ae:91:4b:74:11 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/multinode-699000/disk.qcow2
	I0914 10:27:09.584302    4124 main.go:141] libmachine: STDOUT: 
	I0914 10:27:09.584321    4124 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 10:27:09.584352    4124 fix.go:56] duration metric: took 14.259125ms for fixHost
	I0914 10:27:09.584356    4124 start.go:83] releasing machines lock for "multinode-699000", held for 14.274ms
	W0914 10:27:09.584360    4124 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 10:27:09.584407    4124 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 10:27:09.584412    4124 start.go:729] Will try again in 5 seconds ...
	I0914 10:27:14.586210    4124 start.go:360] acquireMachinesLock for multinode-699000: {Name:mk3755b4300f547653a71a0686083053e04fcd63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 10:27:14.586595    4124 start.go:364] duration metric: took 307.958µs to acquireMachinesLock for "multinode-699000"
	I0914 10:27:14.586713    4124 start.go:96] Skipping create...Using existing machine configuration
	I0914 10:27:14.586737    4124 fix.go:54] fixHost starting: 
	I0914 10:27:14.587414    4124 fix.go:112] recreateIfNeeded on multinode-699000: state=Stopped err=<nil>
	W0914 10:27:14.587441    4124 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 10:27:14.592928    4124 out.go:177] * Restarting existing qemu2 VM for "multinode-699000" ...
	I0914 10:27:14.599764    4124 qemu.go:418] Using hvf for hardware acceleration
	I0914 10:27:14.599996    4124 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/multinode-699000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/multinode-699000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/multinode-699000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:ae:91:4b:74:11 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/multinode-699000/disk.qcow2
	I0914 10:27:14.608826    4124 main.go:141] libmachine: STDOUT: 
	I0914 10:27:14.608891    4124 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 10:27:14.609011    4124 fix.go:56] duration metric: took 22.274458ms for fixHost
	I0914 10:27:14.609031    4124 start.go:83] releasing machines lock for "multinode-699000", held for 22.414875ms
	W0914 10:27:14.609206    4124 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-699000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-699000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 10:27:14.615764    4124 out.go:201] 
	W0914 10:27:14.619878    4124 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 10:27:14.619909    4124 out.go:270] * 
	* 
	W0914 10:27:14.622927    4124 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 10:27:14.629832    4124 out.go:201] 

** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-699000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-699000 -n multinode-699000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-699000 -n multinode-699000: exit status 7 (70.29325ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-699000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.26s)
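Both restart attempts above die on the same `Failed to connect to "/var/run/socket_vmnet": Connection refused` from `socket_vmnet_client`, meaning the socket_vmnet daemon is not accepting connections on the path minikube was configured with (`SocketVMnetPath:/var/run/socket_vmnet` in the cluster config). Some quick host-side checks (a sketch; the daemon binary path and gateway address are assumptions based on a source install under /opt/socket_vmnet, and the launchd label varies by install method):

	# Is the socket present, and is a daemon serving it?
	ls -l /var/run/socket_vmnet
	sudo launchctl list | grep -i socket_vmnet   # label varies by install
	# For a source install, restarting the daemon typically looks something like:
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet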

TestMultiNode/serial/ValidateNameConflict (20.1s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-699000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-699000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-699000-m01 --driver=qemu2 : exit status 80 (9.850214875s)

-- stdout --
	* [multinode-699000-m01] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19643
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-699000-m01" primary control-plane node in "multinode-699000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-699000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-699000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-699000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-699000-m02 --driver=qemu2 : exit status 80 (10.023323042s)

-- stdout --
	* [multinode-699000-m02] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19643
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-699000-m02" primary control-plane node in "multinode-699000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-699000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-699000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-699000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-699000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-699000: exit status 83 (82.006541ms)

-- stdout --
	* The control-plane node multinode-699000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-699000"

-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-699000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-699000 -n multinode-699000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-699000 -n multinode-699000: exit status 7 (30.338375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-699000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.10s)
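ValidateNameConflict never reaches its real assertion here: neither the `-m01` nor the `-m02` profile comes up (same socket_vmnet failure), and the final `node add` is rejected only because the base profile is stopped. When reproducing by hand, listing the existing profiles first shows what the name-suffix check would collide with (a sketch using only commands that appear in this log plus `profile list`):

	# Inspect existing profiles before exercising name-conflict handling,
	# then clean up the throwaway profile as the test does.
	out/minikube-darwin-arm64 profile list
	out/minikube-darwin-arm64 delete -p multinode-699000-m02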

TestPreload (10s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-140000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-140000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.8519205s)

-- stdout --
	* [test-preload-140000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19643
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-140000" primary control-plane node in "test-preload-140000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-140000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0914 10:27:34.957571    4184 out.go:345] Setting OutFile to fd 1 ...
	I0914 10:27:34.957695    4184 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:27:34.957699    4184 out.go:358] Setting ErrFile to fd 2...
	I0914 10:27:34.957701    4184 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:27:34.957806    4184 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19643-1079/.minikube/bin
	I0914 10:27:34.958853    4184 out.go:352] Setting JSON to false
	I0914 10:27:34.975219    4184 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3417,"bootTime":1726331437,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0914 10:27:34.975285    4184 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0914 10:27:34.982233    4184 out.go:177] * [test-preload-140000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0914 10:27:34.990242    4184 out.go:177]   - MINIKUBE_LOCATION=19643
	I0914 10:27:34.990321    4184 notify.go:220] Checking for updates...
	I0914 10:27:34.997170    4184 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	I0914 10:27:35.000189    4184 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 10:27:35.003154    4184 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 10:27:35.006189    4184 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	I0914 10:27:35.009178    4184 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 10:27:35.012504    4184 config.go:182] Loaded profile config "multinode-699000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 10:27:35.012563    4184 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 10:27:35.017159    4184 out.go:177] * Using the qemu2 driver based on user configuration
	I0914 10:27:35.024143    4184 start.go:297] selected driver: qemu2
	I0914 10:27:35.024150    4184 start.go:901] validating driver "qemu2" against <nil>
	I0914 10:27:35.024157    4184 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 10:27:35.026455    4184 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0914 10:27:35.029147    4184 out.go:177] * Automatically selected the socket_vmnet network
	I0914 10:27:35.032239    4184 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 10:27:35.032256    4184 cni.go:84] Creating CNI manager for ""
	I0914 10:27:35.032283    4184 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 10:27:35.032288    4184 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0914 10:27:35.032334    4184 start.go:340] cluster config:
	{Name:test-preload-140000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-140000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 10:27:35.036016    4184 iso.go:125] acquiring lock: {Name:mk9ce8cb266895455f3a8fa26b67755853c5c1f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 10:27:35.044177    4184 out.go:177] * Starting "test-preload-140000" primary control-plane node in "test-preload-140000" cluster
	I0914 10:27:35.048043    4184 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0914 10:27:35.048126    4184 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/test-preload-140000/config.json ...
	I0914 10:27:35.048147    4184 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/test-preload-140000/config.json: {Name:mked25a2b9f9b8dfe6609c5e6ca7ba7d9f90abf9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 10:27:35.048150    4184 cache.go:107] acquiring lock: {Name:mke2dcde6b6e0cacbee12e7df28e773e9d60b74a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 10:27:35.048178    4184 cache.go:107] acquiring lock: {Name:mk9233d95de44199f65029410f4de74433808da8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 10:27:35.048199    4184 cache.go:107] acquiring lock: {Name:mkbfeeab0cc9260fdcd6dcff572ff301af01fbf5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 10:27:35.048320    4184 cache.go:107] acquiring lock: {Name:mk5954e25b4e65950f5a1ebe2bd0000ed5b90797 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 10:27:35.048337    4184 cache.go:107] acquiring lock: {Name:mk3bf3c850d4c70a721a43eab549d4bdb20c12d1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 10:27:35.048373    4184 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0914 10:27:35.048387    4184 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0914 10:27:35.048388    4184 cache.go:107] acquiring lock: {Name:mkc0e2f14cb9e3234a236e1e7eae6f97286248cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 10:27:35.048150    4184 cache.go:107] acquiring lock: {Name:mk0e1879ce036f9bece1df1ea78320be055b3a63 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 10:27:35.048467    4184 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 10:27:35.048494    4184 start.go:360] acquireMachinesLock for test-preload-140000: {Name:mk3755b4300f547653a71a0686083053e04fcd63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 10:27:35.048489    4184 cache.go:107] acquiring lock: {Name:mkd36deb393738a7effddfb256000f26e4bad9b6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 10:27:35.048560    4184 start.go:364] duration metric: took 59.625µs to acquireMachinesLock for "test-preload-140000"
	I0914 10:27:35.048580    4184 start.go:93] Provisioning new machine with config: &{Name:test-preload-140000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-140000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 10:27:35.048688    4184 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 10:27:35.048692    4184 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0914 10:27:35.048699    4184 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0914 10:27:35.048706    4184 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0914 10:27:35.048788    4184 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0914 10:27:35.048798    4184 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0914 10:27:35.053245    4184 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0914 10:27:35.061233    4184 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0914 10:27:35.061298    4184 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0914 10:27:35.061314    4184 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0914 10:27:35.061859    4184 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0914 10:27:35.064076    4184 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0914 10:27:35.064125    4184 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 10:27:35.064138    4184 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0914 10:27:35.064138    4184 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0914 10:27:35.072186    4184 start.go:159] libmachine.API.Create for "test-preload-140000" (driver="qemu2")
	I0914 10:27:35.072205    4184 client.go:168] LocalClient.Create starting
	I0914 10:27:35.072300    4184 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/ca.pem
	I0914 10:27:35.072333    4184 main.go:141] libmachine: Decoding PEM data...
	I0914 10:27:35.072342    4184 main.go:141] libmachine: Parsing certificate...
	I0914 10:27:35.072377    4184 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/cert.pem
	I0914 10:27:35.072400    4184 main.go:141] libmachine: Decoding PEM data...
	I0914 10:27:35.072406    4184 main.go:141] libmachine: Parsing certificate...
	I0914 10:27:35.072750    4184 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19643-1079/.minikube/cache/iso/arm64/minikube-v1.34.0-1726281733-19643-arm64.iso...
	I0914 10:27:35.235135    4184 main.go:141] libmachine: Creating SSH key...
	I0914 10:27:35.282565    4184 main.go:141] libmachine: Creating Disk image...
	I0914 10:27:35.282590    4184 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 10:27:35.282797    4184 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/test-preload-140000/disk.qcow2.raw /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/test-preload-140000/disk.qcow2
	I0914 10:27:35.292636    4184 main.go:141] libmachine: STDOUT: 
	I0914 10:27:35.292656    4184 main.go:141] libmachine: STDERR: 
	I0914 10:27:35.292705    4184 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/test-preload-140000/disk.qcow2 +20000M
	I0914 10:27:35.302045    4184 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 10:27:35.302074    4184 main.go:141] libmachine: STDERR: 
	I0914 10:27:35.302089    4184 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/test-preload-140000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/test-preload-140000/disk.qcow2
	I0914 10:27:35.302100    4184 main.go:141] libmachine: Starting QEMU VM...
	I0914 10:27:35.302111    4184 qemu.go:418] Using hvf for hardware acceleration
	I0914 10:27:35.302152    4184 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/test-preload-140000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/test-preload-140000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/test-preload-140000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:41:79:03:fb:d3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/test-preload-140000/disk.qcow2
	I0914 10:27:35.304403    4184 main.go:141] libmachine: STDOUT: 
	I0914 10:27:35.304419    4184 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 10:27:35.304456    4184 client.go:171] duration metric: took 232.254667ms to LocalClient.Create
	I0914 10:27:35.626061    4184 cache.go:162] opening:  /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0914 10:27:35.638750    4184 cache.go:162] opening:  /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0914 10:27:35.641198    4184 cache.go:162] opening:  /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0914 10:27:35.657205    4184 cache.go:162] opening:  /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0914 10:27:35.693484    4184 cache.go:162] opening:  /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	W0914 10:27:35.722489    4184 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0914 10:27:35.722521    4184 cache.go:162] opening:  /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0914 10:27:35.756739    4184 cache.go:157] /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0914 10:27:35.756766    4184 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 708.620667ms
	I0914 10:27:35.756784    4184 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	I0914 10:27:35.757906    4184 cache.go:162] opening:  /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	W0914 10:27:36.256209    4184 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0914 10:27:36.256320    4184 cache.go:162] opening:  /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0914 10:27:36.718285    4184 cache.go:157] /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0914 10:27:36.718349    4184 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.670284083s
	I0914 10:27:36.718379    4184 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0914 10:27:37.304641    4184 start.go:128] duration metric: took 2.256041375s to createHost
	I0914 10:27:37.304696    4184 start.go:83] releasing machines lock for "test-preload-140000", held for 2.256237041s
	W0914 10:27:37.304746    4184 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 10:27:37.315829    4184 out.go:177] * Deleting "test-preload-140000" in qemu2 ...
	W0914 10:27:37.356494    4184 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 10:27:37.356521    4184 start.go:729] Will try again in 5 seconds ...
	I0914 10:27:38.264882    4184 cache.go:157] /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0914 10:27:38.264925    4184 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 3.216799292s
	I0914 10:27:38.264949    4184 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0914 10:27:38.459255    4184 cache.go:157] /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0914 10:27:38.459318    4184 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 3.411100792s
	I0914 10:27:38.459356    4184 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0914 10:27:39.553523    4184 cache.go:157] /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0914 10:27:39.553562    4184 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 4.505636875s
	I0914 10:27:39.553581    4184 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0914 10:27:39.674051    4184 cache.go:157] /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0914 10:27:39.674092    4184 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 4.626137292s
	I0914 10:27:39.674115    4184 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0914 10:27:40.657907    4184 cache.go:157] /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0914 10:27:40.657953    4184 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 5.609906542s
	I0914 10:27:40.657978    4184 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0914 10:27:42.356489    4184 start.go:360] acquireMachinesLock for test-preload-140000: {Name:mk3755b4300f547653a71a0686083053e04fcd63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 10:27:42.356981    4184 start.go:364] duration metric: took 397.917µs to acquireMachinesLock for "test-preload-140000"
	I0914 10:27:42.357114    4184 start.go:93] Provisioning new machine with config: &{Name:test-preload-140000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-140000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 10:27:42.357316    4184 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 10:27:42.367674    4184 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0914 10:27:42.419608    4184 start.go:159] libmachine.API.Create for "test-preload-140000" (driver="qemu2")
	I0914 10:27:42.419672    4184 client.go:168] LocalClient.Create starting
	I0914 10:27:42.419797    4184 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/ca.pem
	I0914 10:27:42.419876    4184 main.go:141] libmachine: Decoding PEM data...
	I0914 10:27:42.419895    4184 main.go:141] libmachine: Parsing certificate...
	I0914 10:27:42.419956    4184 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/cert.pem
	I0914 10:27:42.420000    4184 main.go:141] libmachine: Decoding PEM data...
	I0914 10:27:42.420013    4184 main.go:141] libmachine: Parsing certificate...
	I0914 10:27:42.420525    4184 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19643-1079/.minikube/cache/iso/arm64/minikube-v1.34.0-1726281733-19643-arm64.iso...
	I0914 10:27:42.593117    4184 main.go:141] libmachine: Creating SSH key...
	I0914 10:27:42.717531    4184 main.go:141] libmachine: Creating Disk image...
	I0914 10:27:42.717539    4184 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 10:27:42.717712    4184 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/test-preload-140000/disk.qcow2.raw /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/test-preload-140000/disk.qcow2
	I0914 10:27:42.726950    4184 main.go:141] libmachine: STDOUT: 
	I0914 10:27:42.726965    4184 main.go:141] libmachine: STDERR: 
	I0914 10:27:42.727023    4184 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/test-preload-140000/disk.qcow2 +20000M
	I0914 10:27:42.735274    4184 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 10:27:42.735289    4184 main.go:141] libmachine: STDERR: 
	I0914 10:27:42.735310    4184 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/test-preload-140000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/test-preload-140000/disk.qcow2
	I0914 10:27:42.735316    4184 main.go:141] libmachine: Starting QEMU VM...
	I0914 10:27:42.735326    4184 qemu.go:418] Using hvf for hardware acceleration
	I0914 10:27:42.735365    4184 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/test-preload-140000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/test-preload-140000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/test-preload-140000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:e4:74:af:3b:3c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/test-preload-140000/disk.qcow2
	I0914 10:27:42.737125    4184 main.go:141] libmachine: STDOUT: 
	I0914 10:27:42.737139    4184 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 10:27:42.737152    4184 client.go:171] duration metric: took 317.489958ms to LocalClient.Create
	I0914 10:27:44.737299    4184 start.go:128] duration metric: took 2.380056833s to createHost
	I0914 10:27:44.737364    4184 start.go:83] releasing machines lock for "test-preload-140000", held for 2.380463042s
	W0914 10:27:44.737686    4184 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-140000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-140000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 10:27:44.747322    4184 out.go:201] 
	W0914 10:27:44.751397    4184 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 10:27:44.751423    4184 out.go:270] * 
	* 
	W0914 10:27:44.754413    4184 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 10:27:44.764271    4184 out.go:201] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-140000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:629: *** TestPreload FAILED at 2024-09-14 10:27:44.782117 -0700 PDT m=+2691.075665167
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-140000 -n test-preload-140000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-140000 -n test-preload-140000: exit status 7 (65.929333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-140000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-140000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-140000
--- FAIL: TestPreload (10.00s)
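
This failure, like most in this report, reduces to one root cause: minikube launches QEMU through socket_vmnet_client, and the daemon socket at /var/run/socket_vmnet refuses connections. A minimal host-side spot-check, using only paths visible in the failing command line above (a hypothetical diagnostic sketch, not part of the test suite):

	# Does the UNIX socket exist, and is the socket_vmnet daemon alive?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# Reproduce the connection attempt outside minikube; if the daemon is
	# down, this prints the same "Connection refused" seen in the logs.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true

If the client also fails here, the fault lies with the host's socket_vmnet service rather than with any individual test.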

TestScheduledStopUnix (10.11s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-053000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-053000 --memory=2048 --driver=qemu2 : exit status 80 (9.956174459s)

-- stdout --
	* [scheduled-stop-053000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19643
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-053000" primary control-plane node in "scheduled-stop-053000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-053000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-053000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-053000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19643
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-053000" primary control-plane node in "scheduled-stop-053000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-053000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-053000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:629: *** TestScheduledStopUnix FAILED at 2024-09-14 10:27:54.886595 -0700 PDT m=+2701.180601876
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-053000 -n scheduled-stop-053000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-053000 -n scheduled-stop-053000: exit status 7 (68.995583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-053000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-053000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-053000
--- FAIL: TestScheduledStopUnix (10.11s)
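
The same socket_vmnet refusal repeats here, which points at restarting the daemon on the host rather than at the test itself. A sketch of the recovery, assuming the upstream lima-vm/socket_vmnet layout implied by the /opt/socket_vmnet paths above (the launchd label and the --vmnet-gateway flag are assumptions, not taken from this run):

	# If installed as a launchd daemon, restart it (label per upstream docs):
	sudo launchctl kickstart -k system/io.github.lima-vm.socket_vmnet
	# Otherwise run it in the foreground to surface startup errors directly:
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet

Re-running a single failing test afterwards (for example, go test -run TestScheduledStopUnix in the integration suite) verifies the daemon before a full re-run.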

TestSkaffold (12.14s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe3869224587 version
skaffold_test.go:59: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe3869224587 version: (1.069916125s)
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-611000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-611000 --memory=2600 --driver=qemu2 : exit status 80 (9.827962416s)

-- stdout --
	* [skaffold-611000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19643
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-611000" primary control-plane node in "skaffold-611000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-611000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-611000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-611000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19643
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-611000" primary control-plane node in "skaffold-611000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-611000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-611000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:629: *** TestSkaffold FAILED at 2024-09-14 10:28:07.032057 -0700 PDT m=+2713.326595001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-611000 -n skaffold-611000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-611000 -n skaffold-611000: exit status 7 (62.714875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-611000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-611000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-611000
--- FAIL: TestSkaffold (12.14s)

TestRunningBinaryUpgrade (601.13s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.2489978194 start -p running-upgrade-158000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.2489978194 start -p running-upgrade-158000 --memory=2200 --vm-driver=qemu2 : (1m3.325631667s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-158000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0914 10:31:36.656056    1603 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/addons-528000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-158000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m23.285924709s)

-- stdout --
	* [running-upgrade-158000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19643
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-158000" primary control-plane node in "running-upgrade-158000" cluster
	* Updating the running qemu2 "running-upgrade-158000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0914 10:29:52.354106    4633 out.go:345] Setting OutFile to fd 1 ...
	I0914 10:29:52.354243    4633 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:29:52.354246    4633 out.go:358] Setting ErrFile to fd 2...
	I0914 10:29:52.354249    4633 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:29:52.354378    4633 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19643-1079/.minikube/bin
	I0914 10:29:52.355376    4633 out.go:352] Setting JSON to false
	I0914 10:29:52.372090    4633 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3555,"bootTime":1726331437,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0914 10:29:52.372162    4633 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0914 10:29:52.377043    4633 out.go:177] * [running-upgrade-158000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0914 10:29:52.385071    4633 out.go:177]   - MINIKUBE_LOCATION=19643
	I0914 10:29:52.385119    4633 notify.go:220] Checking for updates...
	I0914 10:29:52.392962    4633 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	I0914 10:29:52.396974    4633 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 10:29:52.399995    4633 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 10:29:52.403025    4633 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	I0914 10:29:52.405954    4633 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 10:29:52.409243    4633 config.go:182] Loaded profile config "running-upgrade-158000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0914 10:29:52.412990    4633 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0914 10:29:52.414388    4633 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 10:29:52.418947    4633 out.go:177] * Using the qemu2 driver based on existing profile
	I0914 10:29:52.425824    4633 start.go:297] selected driver: qemu2
	I0914 10:29:52.425830    4633 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-158000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50278 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-158000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0914 10:29:52.425876    4633 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 10:29:52.428226    4633 cni.go:84] Creating CNI manager for ""
	I0914 10:29:52.428261    4633 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 10:29:52.428285    4633 start.go:340] cluster config:
	{Name:running-upgrade-158000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50278 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-158000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0914 10:29:52.428335    4633 iso.go:125] acquiring lock: {Name:mk9ce8cb266895455f3a8fa26b67755853c5c1f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 10:29:52.435967    4633 out.go:177] * Starting "running-upgrade-158000" primary control-plane node in "running-upgrade-158000" cluster
	I0914 10:29:52.439960    4633 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0914 10:29:52.439974    4633 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0914 10:29:52.439986    4633 cache.go:56] Caching tarball of preloaded images
	I0914 10:29:52.440050    4633 preload.go:172] Found /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 10:29:52.440056    4633 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0914 10:29:52.440113    4633 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/running-upgrade-158000/config.json ...
	I0914 10:29:52.440543    4633 start.go:360] acquireMachinesLock for running-upgrade-158000: {Name:mk3755b4300f547653a71a0686083053e04fcd63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 10:29:52.440580    4633 start.go:364] duration metric: took 29.667µs to acquireMachinesLock for "running-upgrade-158000"
	I0914 10:29:52.440588    4633 start.go:96] Skipping create...Using existing machine configuration
	I0914 10:29:52.440593    4633 fix.go:54] fixHost starting: 
	I0914 10:29:52.441209    4633 fix.go:112] recreateIfNeeded on running-upgrade-158000: state=Running err=<nil>
	W0914 10:29:52.441218    4633 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 10:29:52.449972    4633 out.go:177] * Updating the running qemu2 "running-upgrade-158000" VM ...
	I0914 10:29:52.453957    4633 machine.go:93] provisionDockerMachine start ...
	I0914 10:29:52.453995    4633 main.go:141] libmachine: Using SSH client type: native
	I0914 10:29:52.454116    4633 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100b81190] 0x100b839d0 <nil>  [] 0s} localhost 50246 <nil> <nil>}
	I0914 10:29:52.454122    4633 main.go:141] libmachine: About to run SSH command:
	hostname
	I0914 10:29:52.505142    4633 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-158000
	
	I0914 10:29:52.505156    4633 buildroot.go:166] provisioning hostname "running-upgrade-158000"
	I0914 10:29:52.505205    4633 main.go:141] libmachine: Using SSH client type: native
	I0914 10:29:52.505320    4633 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100b81190] 0x100b839d0 <nil>  [] 0s} localhost 50246 <nil> <nil>}
	I0914 10:29:52.505325    4633 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-158000 && echo "running-upgrade-158000" | sudo tee /etc/hostname
	I0914 10:29:52.558237    4633 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-158000
	
	I0914 10:29:52.558282    4633 main.go:141] libmachine: Using SSH client type: native
	I0914 10:29:52.558372    4633 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100b81190] 0x100b839d0 <nil>  [] 0s} localhost 50246 <nil> <nil>}
	I0914 10:29:52.558380    4633 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-158000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-158000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-158000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 10:29:52.608605    4633 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 10:29:52.608615    4633 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19643-1079/.minikube CaCertPath:/Users/jenkins/minikube-integration/19643-1079/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19643-1079/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19643-1079/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19643-1079/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19643-1079/.minikube}
	I0914 10:29:52.608625    4633 buildroot.go:174] setting up certificates
	I0914 10:29:52.608632    4633 provision.go:84] configureAuth start
	I0914 10:29:52.608637    4633 provision.go:143] copyHostCerts
	I0914 10:29:52.608712    4633 exec_runner.go:144] found /Users/jenkins/minikube-integration/19643-1079/.minikube/ca.pem, removing ...
	I0914 10:29:52.608719    4633 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19643-1079/.minikube/ca.pem
	I0914 10:29:52.608841    4633 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19643-1079/.minikube/ca.pem (1078 bytes)
	I0914 10:29:52.609015    4633 exec_runner.go:144] found /Users/jenkins/minikube-integration/19643-1079/.minikube/cert.pem, removing ...
	I0914 10:29:52.609019    4633 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19643-1079/.minikube/cert.pem
	I0914 10:29:52.609068    4633 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19643-1079/.minikube/cert.pem (1123 bytes)
	I0914 10:29:52.609170    4633 exec_runner.go:144] found /Users/jenkins/minikube-integration/19643-1079/.minikube/key.pem, removing ...
	I0914 10:29:52.609174    4633 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19643-1079/.minikube/key.pem
	I0914 10:29:52.609220    4633 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19643-1079/.minikube/key.pem (1675 bytes)
	I0914 10:29:52.609311    4633 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19643-1079/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19643-1079/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-158000 san=[127.0.0.1 localhost minikube running-upgrade-158000]
	I0914 10:29:52.728671    4633 provision.go:177] copyRemoteCerts
	I0914 10:29:52.728714    4633 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 10:29:52.728723    4633 sshutil.go:53] new ssh client: &{IP:localhost Port:50246 SSHKeyPath:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/running-upgrade-158000/id_rsa Username:docker}
	I0914 10:29:52.756774    4633 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 10:29:52.765308    4633 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0914 10:29:52.772087    4633 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0914 10:29:52.779437    4633 provision.go:87] duration metric: took 170.802417ms to configureAuth
	I0914 10:29:52.779446    4633 buildroot.go:189] setting minikube options for container-runtime
	I0914 10:29:52.779550    4633 config.go:182] Loaded profile config "running-upgrade-158000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0914 10:29:52.779584    4633 main.go:141] libmachine: Using SSH client type: native
	I0914 10:29:52.779673    4633 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100b81190] 0x100b839d0 <nil>  [] 0s} localhost 50246 <nil> <nil>}
	I0914 10:29:52.779681    4633 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0914 10:29:52.829500    4633 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0914 10:29:52.829510    4633 buildroot.go:70] root file system type: tmpfs
	I0914 10:29:52.829558    4633 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0914 10:29:52.829616    4633 main.go:141] libmachine: Using SSH client type: native
	I0914 10:29:52.829734    4633 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100b81190] 0x100b839d0 <nil>  [] 0s} localhost 50246 <nil> <nil>}
	I0914 10:29:52.829767    4633 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0914 10:29:52.884337    4633 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0914 10:29:52.884393    4633 main.go:141] libmachine: Using SSH client type: native
	I0914 10:29:52.884511    4633 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100b81190] 0x100b839d0 <nil>  [] 0s} localhost 50246 <nil> <nil>}
	I0914 10:29:52.884522    4633 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0914 10:29:52.937602    4633 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 10:29:52.937612    4633 machine.go:96] duration metric: took 483.669ms to provisionDockerMachine
	I0914 10:29:52.937620    4633 start.go:293] postStartSetup for "running-upgrade-158000" (driver="qemu2")
	I0914 10:29:52.937626    4633 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 10:29:52.937694    4633 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 10:29:52.937703    4633 sshutil.go:53] new ssh client: &{IP:localhost Port:50246 SSHKeyPath:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/running-upgrade-158000/id_rsa Username:docker}
	I0914 10:29:52.969157    4633 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 10:29:52.970979    4633 info.go:137] Remote host: Buildroot 2021.02.12
	I0914 10:29:52.970988    4633 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19643-1079/.minikube/addons for local assets ...
	I0914 10:29:52.971076    4633 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19643-1079/.minikube/files for local assets ...
	I0914 10:29:52.971204    4633 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19643-1079/.minikube/files/etc/ssl/certs/16032.pem -> 16032.pem in /etc/ssl/certs
	I0914 10:29:52.971340    4633 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 10:29:52.974659    4633 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19643-1079/.minikube/files/etc/ssl/certs/16032.pem --> /etc/ssl/certs/16032.pem (1708 bytes)
	I0914 10:29:52.982136    4633 start.go:296] duration metric: took 44.512459ms for postStartSetup
	I0914 10:29:52.982151    4633 fix.go:56] duration metric: took 541.581875ms for fixHost
	I0914 10:29:52.982203    4633 main.go:141] libmachine: Using SSH client type: native
	I0914 10:29:52.982317    4633 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100b81190] 0x100b839d0 <nil>  [] 0s} localhost 50246 <nil> <nil>}
	I0914 10:29:52.982321    4633 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 10:29:53.031909    4633 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726334992.732640059
	
	I0914 10:29:53.031918    4633 fix.go:216] guest clock: 1726334992.732640059
	I0914 10:29:53.031922    4633 fix.go:229] Guest: 2024-09-14 10:29:52.732640059 -0700 PDT Remote: 2024-09-14 10:29:52.982153 -0700 PDT m=+0.648684001 (delta=-249.512941ms)
	I0914 10:29:53.031934    4633 fix.go:200] guest clock delta is within tolerance: -249.512941ms
	I0914 10:29:53.031937    4633 start.go:83] releasing machines lock for "running-upgrade-158000", held for 591.3775ms
	I0914 10:29:53.032004    4633 ssh_runner.go:195] Run: cat /version.json
	I0914 10:29:53.032015    4633 sshutil.go:53] new ssh client: &{IP:localhost Port:50246 SSHKeyPath:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/running-upgrade-158000/id_rsa Username:docker}
	I0914 10:29:53.032004    4633 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 10:29:53.032044    4633 sshutil.go:53] new ssh client: &{IP:localhost Port:50246 SSHKeyPath:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/running-upgrade-158000/id_rsa Username:docker}
	W0914 10:29:53.032623    4633 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:50368->127.0.0.1:50246: read: connection reset by peer
	I0914 10:29:53.032648    4633 retry.go:31] will retry after 162.669721ms: ssh: handshake failed: read tcp 127.0.0.1:50368->127.0.0.1:50246: read: connection reset by peer
	W0914 10:29:53.057459    4633 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0914 10:29:53.057507    4633 ssh_runner.go:195] Run: systemctl --version
	I0914 10:29:53.059206    4633 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 10:29:53.060745    4633 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 10:29:53.060774    4633 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0914 10:29:53.065089    4633 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0914 10:29:53.070338    4633 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 10:29:53.070352    4633 start.go:495] detecting cgroup driver to use...
	I0914 10:29:53.070421    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 10:29:53.075305    4633 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0914 10:29:53.078747    4633 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0914 10:29:53.082036    4633 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0914 10:29:53.082069    4633 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0914 10:29:53.084887    4633 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0914 10:29:53.088108    4633 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0914 10:29:53.091658    4633 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0914 10:29:53.095324    4633 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 10:29:53.098651    4633 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0914 10:29:53.101545    4633 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0914 10:29:53.104549    4633 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0914 10:29:53.107716    4633 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 10:29:53.110839    4633 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 10:29:53.113385    4633 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 10:29:53.207556    4633 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0914 10:29:53.218908    4633 start.go:495] detecting cgroup driver to use...
	I0914 10:29:53.218977    4633 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0914 10:29:53.224854    4633 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 10:29:53.231394    4633 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 10:29:53.271730    4633 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 10:29:53.276526    4633 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0914 10:29:53.281295    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 10:29:53.286412    4633 ssh_runner.go:195] Run: which cri-dockerd
	I0914 10:29:53.287636    4633 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0914 10:29:53.290508    4633 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0914 10:29:53.295303    4633 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0914 10:29:53.397303    4633 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0914 10:29:53.489023    4633 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0914 10:29:53.489072    4633 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0914 10:29:53.494287    4633 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 10:29:53.587181    4633 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0914 10:29:56.311970    4633 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.724884584s)
	I0914 10:29:56.312046    4633 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0914 10:29:56.316817    4633 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0914 10:29:56.323446    4633 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0914 10:29:56.328089    4633 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0914 10:29:56.411258    4633 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0914 10:29:56.493848    4633 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 10:29:56.572791    4633 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0914 10:29:56.578963    4633 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0914 10:29:56.584025    4633 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 10:29:56.662902    4633 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0914 10:29:56.707911    4633 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0914 10:29:56.707991    4633 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0914 10:29:56.709956    4633 start.go:563] Will wait 60s for crictl version
	I0914 10:29:56.710016    4633 ssh_runner.go:195] Run: which crictl
	I0914 10:29:56.711378    4633 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 10:29:56.723352    4633 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0914 10:29:56.723433    4633 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0914 10:29:56.736146    4633 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0914 10:29:56.758406    4633 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0914 10:29:56.758514    4633 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0914 10:29:56.759810    4633 kubeadm.go:883] updating cluster {Name:running-upgrade-158000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50278 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-158000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0914 10:29:56.759862    4633 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0914 10:29:56.759911    4633 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0914 10:29:56.770252    4633 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0914 10:29:56.770260    4633 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0914 10:29:56.770307    4633 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0914 10:29:56.773163    4633 ssh_runner.go:195] Run: which lz4
	I0914 10:29:56.774341    4633 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0914 10:29:56.775550    4633 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 10:29:56.775558    4633 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0914 10:29:57.730163    4633 docker.go:649] duration metric: took 955.900709ms to copy over tarball
	I0914 10:29:57.730226    4633 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 10:29:58.835272    4633 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.105078875s)
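The preload path dominates this phase: a 359 MB tarball is copied over in about 0.96 s and unpacked into /var in about 1.1 s. A sketch of the extraction step, shelling out to tar with lz4 as the decompressor exactly as the logged command does (assumes tar, lz4, and sudo are available on the guest):

    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func main() {
        // Mirrors: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
        cmd := exec.Command("sudo", "tar",
            "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            log.Fatalf("extracting preload: %v", err)
        }
    }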
	I0914 10:29:58.835287    4633 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0914 10:29:58.852041    4633 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0914 10:29:58.855734    4633 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0914 10:29:58.861086    4633 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 10:29:58.942416    4633 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0914 10:30:00.129398    4633 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.187015166s)
	I0914 10:30:00.129531    4633 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0914 10:30:00.149647    4633 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0914 10:30:00.149654    4633 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0914 10:30:00.149659    4633 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0914 10:30:00.154106    4633 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 10:30:00.157158    4633 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0914 10:30:00.159405    4633 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0914 10:30:00.159543    4633 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 10:30:00.161792    4633 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0914 10:30:00.161938    4633 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0914 10:30:00.163697    4633 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0914 10:30:00.163777    4633 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0914 10:30:00.164802    4633 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0914 10:30:00.164837    4633 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0914 10:30:00.166331    4633 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0914 10:30:00.166455    4633 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0914 10:30:00.167634    4633 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0914 10:30:00.167662    4633 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0914 10:30:00.168876    4633 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0914 10:30:00.169612    4633 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0914 10:30:00.526549    4633 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0914 10:30:00.537990    4633 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0914 10:30:00.538017    4633 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0914 10:30:00.538081    4633 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0914 10:30:00.550299    4633 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0914 10:30:00.567117    4633 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0914 10:30:00.574081    4633 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0914 10:30:00.582966    4633 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0914 10:30:00.582985    4633 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0914 10:30:00.583050    4633 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0914 10:30:00.589244    4633 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0914 10:30:00.589266    4633 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0914 10:30:00.589332    4633 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0914 10:30:00.594396    4633 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0914 10:30:00.604496    4633 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0914 10:30:00.606702    4633 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0914 10:30:00.609054    4633 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0914 10:30:00.609072    4633 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0914 10:30:00.609132    4633 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0914 10:30:00.618723    4633 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	W0914 10:30:00.633979    4633 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0914 10:30:00.634094    4633 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0914 10:30:00.644177    4633 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0914 10:30:00.644197    4633 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0914 10:30:00.644260    4633 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0914 10:30:00.646350    4633 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0914 10:30:00.658292    4633 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0914 10:30:00.658435    4633 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0914 10:30:00.659571    4633 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0914 10:30:00.662164    4633 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0914 10:30:00.662183    4633 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0914 10:30:00.662163    4633 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0914 10:30:00.662242    4633 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0914 10:30:00.662246    4633 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0914 10:30:00.672048    4633 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0914 10:30:00.672074    4633 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0914 10:30:00.672160    4633 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0914 10:30:00.682255    4633 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0914 10:30:00.695249    4633 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0914 10:30:00.695388    4633 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0914 10:30:00.700862    4633 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0914 10:30:00.700901    4633 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0914 10:30:00.723213    4633 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0914 10:30:00.723231    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0914 10:30:00.751506    4633 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0914 10:30:00.751526    4633 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0914 10:30:00.751532    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0914 10:30:00.794796    4633 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
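Images missing from the runtime are streamed into the daemon with 'cat <tarball> | docker load'. The same pipeline in Go, wiring the image tarball straight into the subprocess's stdin (path taken from the pause_3.7 load above):

    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func main() {
        f, err := os.Open("/var/lib/minikube/images/pause_3.7") // path from the log
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        // Equivalent of: sudo cat /var/lib/minikube/images/pause_3.7 | docker load
        cmd := exec.Command("docker", "load")
        cmd.Stdin = f
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            log.Fatalf("docker load: %v", err)
        }
    }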
	W0914 10:30:00.964581    4633 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0914 10:30:00.964767    4633 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 10:30:00.984234    4633 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0914 10:30:00.984266    4633 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 10:30:00.984370    4633 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 10:30:02.106882    4633 ssh_runner.go:235] Completed: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.122532583s)
	I0914 10:30:02.106909    4633 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0914 10:30:02.107215    4633 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0914 10:30:02.111981    4633 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0914 10:30:02.112032    4633 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0914 10:30:02.175621    4633 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0914 10:30:02.175635    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0914 10:30:02.410034    4633 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0914 10:30:02.410075    4633 cache_images.go:92] duration metric: took 2.260504625s to LoadCachedImages
	W0914 10:30:02.410112    4633 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
	I0914 10:30:02.410118    4633 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0914 10:30:02.410181    4633 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-158000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-158000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 10:30:02.410269    4633 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0914 10:30:02.423980    4633 cni.go:84] Creating CNI manager for ""
	I0914 10:30:02.423994    4633 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 10:30:02.423999    4633 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 10:30:02.424008    4633 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-158000 NodeName:running-upgrade-158000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 10:30:02.424103    4633 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-158000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 10:30:02.424168    4633 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0914 10:30:02.427498    4633 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 10:30:02.427535    4633 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 10:30:02.430923    4633 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0914 10:30:02.436588    4633 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 10:30:02.441580    4633 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
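The 2096-byte kubeadm.yaml.new written here is the manifest printed above, which minikube renders from Go templates over the kubeadm options struct. A toy rendering of one fragment with text/template; the struct and field names are illustrative, not minikube's actual template variables:

    package main

    import (
        "os"
        "text/template"
    )

    // clusterOpts is a stand-in for the options struct logged by kubeadm.go:181.
    type clusterOpts struct {
        ClusterName string
        PodSubnet   string
        ServiceCIDR string
        K8sVersion  string
    }

    func main() {
        const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    clusterName: {{.ClusterName}}
    kubernetesVersion: {{.K8sVersion}}
    networking:
      podSubnet: "{{.PodSubnet}}"
      serviceSubnet: {{.ServiceCIDR}}
    `
        t := template.Must(template.New("kubeadm").Parse(tmpl))
        // Values taken from the rendered config above.
        t.Execute(os.Stdout, clusterOpts{
            ClusterName: "mk",
            PodSubnet:   "10.244.0.0/16",
            ServiceCIDR: "10.96.0.0/12",
            K8sVersion:  "v1.24.1",
        })
    }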
	I0914 10:30:02.447225    4633 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0914 10:30:02.448598    4633 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 10:30:02.533191    4633 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 10:30:02.538802    4633 certs.go:68] Setting up /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/running-upgrade-158000 for IP: 10.0.2.15
	I0914 10:30:02.538810    4633 certs.go:194] generating shared ca certs ...
	I0914 10:30:02.538818    4633 certs.go:226] acquiring lock for ca certs: {Name:mk7a785a7c5445527aceab92dcaa64cad76e8086 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 10:30:02.538973    4633 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19643-1079/.minikube/ca.key
	I0914 10:30:02.539033    4633 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19643-1079/.minikube/proxy-client-ca.key
	I0914 10:30:02.539041    4633 certs.go:256] generating profile certs ...
	I0914 10:30:02.539111    4633 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/running-upgrade-158000/client.key
	I0914 10:30:02.539125    4633 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/running-upgrade-158000/apiserver.key.54a47017
	I0914 10:30:02.539135    4633 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/running-upgrade-158000/apiserver.crt.54a47017 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0914 10:30:02.718828    4633 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/running-upgrade-158000/apiserver.crt.54a47017 ...
	I0914 10:30:02.718838    4633 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/running-upgrade-158000/apiserver.crt.54a47017: {Name:mk67081665e97e772c07d120a575c987e5ee80c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 10:30:02.719139    4633 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/running-upgrade-158000/apiserver.key.54a47017 ...
	I0914 10:30:02.719144    4633 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/running-upgrade-158000/apiserver.key.54a47017: {Name:mkb640f44c99417bbc98d950c2e49c3aee0f9265 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 10:30:02.719267    4633 certs.go:381] copying /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/running-upgrade-158000/apiserver.crt.54a47017 -> /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/running-upgrade-158000/apiserver.crt
	I0914 10:30:02.719470    4633 certs.go:385] copying /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/running-upgrade-158000/apiserver.key.54a47017 -> /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/running-upgrade-158000/apiserver.key
	I0914 10:30:02.719655    4633 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/running-upgrade-158000/proxy-client.key
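The apiserver serving certificate is issued with the four IP SANs listed above: the service VIP 10.96.0.1, loopback, 10.0.0.1, and the node IP 10.0.2.15. A self-contained sketch of producing such a certificate with crypto/x509; it is self-signed for brevity, whereas minikube signs with the minikubeCA key:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // The IP SANs from the crypto.go:68 log line above.
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("10.0.2.15"),
            },
        }
        // Self-signed for the sketch; minikube signs with its CA cert/key instead.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            log.Fatal(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }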
	I0914 10:30:02.719790    4633 certs.go:484] found cert: /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/1603.pem (1338 bytes)
	W0914 10:30:02.719822    4633 certs.go:480] ignoring /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/1603_empty.pem, impossibly tiny 0 bytes
	I0914 10:30:02.719828    4633 certs.go:484] found cert: /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/ca-key.pem (1675 bytes)
	I0914 10:30:02.719848    4633 certs.go:484] found cert: /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/ca.pem (1078 bytes)
	I0914 10:30:02.719865    4633 certs.go:484] found cert: /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/cert.pem (1123 bytes)
	I0914 10:30:02.719882    4633 certs.go:484] found cert: /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/key.pem (1675 bytes)
	I0914 10:30:02.719920    4633 certs.go:484] found cert: /Users/jenkins/minikube-integration/19643-1079/.minikube/files/etc/ssl/certs/16032.pem (1708 bytes)
	I0914 10:30:02.720270    4633 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19643-1079/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 10:30:02.727971    4633 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19643-1079/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 10:30:02.735357    4633 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19643-1079/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 10:30:02.742858    4633 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19643-1079/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 10:30:02.750256    4633 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/running-upgrade-158000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0914 10:30:02.756997    4633 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/running-upgrade-158000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0914 10:30:02.763589    4633 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/running-upgrade-158000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 10:30:02.771016    4633 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/running-upgrade-158000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1671 bytes)
	I0914 10:30:02.778618    4633 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19643-1079/.minikube/files/etc/ssl/certs/16032.pem --> /usr/share/ca-certificates/16032.pem (1708 bytes)
	I0914 10:30:02.785382    4633 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19643-1079/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 10:30:02.792022    4633 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/1603.pem --> /usr/share/ca-certificates/1603.pem (1338 bytes)
	I0914 10:30:02.799223    4633 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 10:30:02.804222    4633 ssh_runner.go:195] Run: openssl version
	I0914 10:30:02.806091    4633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16032.pem && ln -fs /usr/share/ca-certificates/16032.pem /etc/ssl/certs/16032.pem"
	I0914 10:30:02.809163    4633 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16032.pem
	I0914 10:30:02.810500    4633 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 14 16:59 /usr/share/ca-certificates/16032.pem
	I0914 10:30:02.810528    4633 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16032.pem
	I0914 10:30:02.812405    4633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16032.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 10:30:02.815501    4633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 10:30:02.818867    4633 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 10:30:02.820213    4633 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 14 16:43 /usr/share/ca-certificates/minikubeCA.pem
	I0914 10:30:02.820237    4633 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 10:30:02.821987    4633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 10:30:02.824614    4633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1603.pem && ln -fs /usr/share/ca-certificates/1603.pem /etc/ssl/certs/1603.pem"
	I0914 10:30:02.827853    4633 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1603.pem
	I0914 10:30:02.829549    4633 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 14 16:59 /usr/share/ca-certificates/1603.pem
	I0914 10:30:02.829577    4633 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1603.pem
	I0914 10:30:02.831273    4633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1603.pem /etc/ssl/certs/51391683.0"
	I0914 10:30:02.834674    4633 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 10:30:02.836304    4633 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 10:30:02.838225    4633 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 10:30:02.840069    4633 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 10:30:02.842039    4633 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 10:30:02.843975    4633 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 10:30:02.845818    4633 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
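Each control-plane certificate is then checked with 'openssl x509 -checkend 86400', which exits non-zero if the certificate expires within 24 hours. The equivalent check in Go, using one of the paths from the log:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    func main() {
        raw, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(raw)
        if block == nil {
            log.Fatal("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        // Equivalent of: openssl x509 -checkend 86400
        if time.Until(cert.NotAfter) < 86400*time.Second {
            fmt.Println("Certificate will expire")
            os.Exit(1)
        }
        fmt.Println("Certificate will not expire")
    }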
	I0914 10:30:02.847735    4633 kubeadm.go:392] StartCluster: {Name:running-upgrade-158000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50278 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-158000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0914 10:30:02.847818    4633 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0914 10:30:02.858929    4633 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 10:30:02.862258    4633 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0914 10:30:02.862269    4633 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0914 10:30:02.862300    4633 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 10:30:02.864935    4633 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 10:30:02.865183    4633 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-158000" does not appear in /Users/jenkins/minikube-integration/19643-1079/kubeconfig
	I0914 10:30:02.865232    4633 kubeconfig.go:62] /Users/jenkins/minikube-integration/19643-1079/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-158000" cluster setting kubeconfig missing "running-upgrade-158000" context setting]
	I0914 10:30:02.865375    4633 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19643-1079/kubeconfig: {Name:mk2bfa274931cfcaab81c340801bce4006cf7459 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 10:30:02.865891    4633 kapi.go:59] client config for running-upgrade-158000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/running-upgrade-158000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/running-upgrade-158000/client.key", CAFile:"/Users/jenkins/minikube-integration/19643-1079/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x102159800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0914 10:30:02.866209    4633 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 10:30:02.868881    4633 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-158000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
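Drift detection is just 'diff -u' between the kubeadm.yaml already on the node and the freshly rendered kubeadm.yaml.new: exit status 0 means identical, 1 means the files differ and the cluster gets reconfigured, as happens here (the criSocket scheme and cgroup driver changed). A sketch of that check:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("sudo", "diff", "-u",
            "/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
        out, err := cmd.CombinedOutput()
        if err != nil {
            // diff exits 1 when the files differ; treat that as detected drift.
            if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
                fmt.Printf("config drift detected:\n%s", out)
                return
            }
            panic(err)
        }
        fmt.Println("kubeadm config unchanged")
    }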
	I0914 10:30:02.868892    4633 kubeadm.go:1160] stopping kube-system containers ...
	I0914 10:30:02.868941    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0914 10:30:02.880008    4633 docker.go:483] Stopping containers: [80e7f52e5b6c 5786a809988b 1b201ba782ce 81ead812048f 8076dfae9f44 24ffba65710c e80a7246231e 4faf351a970e 3595a9f063bb 2320ee4845a9 ae03d68bb317 b1ac5df931d5 0b978c9ce5a1 83a57c91cc1e]
	I0914 10:30:02.880090    4633 ssh_runner.go:195] Run: docker stop 80e7f52e5b6c 5786a809988b 1b201ba782ce 81ead812048f 8076dfae9f44 24ffba65710c e80a7246231e 4faf351a970e 3595a9f063bb 2320ee4845a9 ae03d68bb317 b1ac5df931d5 0b978c9ce5a1 83a57c91cc1e
	I0914 10:30:02.891255    4633 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 10:30:02.990387    4633 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 10:30:02.995250    4633 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5639 Sep 14 17:29 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5649 Sep 14 17:29 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Sep 14 17:29 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5597 Sep 14 17:29 /etc/kubernetes/scheduler.conf
	
	I0914 10:30:02.995293    4633 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50278 /etc/kubernetes/admin.conf
	I0914 10:30:02.999191    4633 kubeadm.go:163] "https://control-plane.minikube.internal:50278" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50278 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0914 10:30:02.999222    4633 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 10:30:03.003025    4633 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50278 /etc/kubernetes/kubelet.conf
	I0914 10:30:03.006292    4633 kubeadm.go:163] "https://control-plane.minikube.internal:50278" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50278 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0914 10:30:03.006317    4633 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 10:30:03.009378    4633 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50278 /etc/kubernetes/controller-manager.conf
	I0914 10:30:03.012325    4633 kubeadm.go:163] "https://control-plane.minikube.internal:50278" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50278 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0914 10:30:03.012350    4633 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 10:30:03.015402    4633 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50278 /etc/kubernetes/scheduler.conf
	I0914 10:30:03.018477    4633 kubeadm.go:163] "https://control-plane.minikube.internal:50278" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50278 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0914 10:30:03.018503    4633 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 10:30:03.021128    4633 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 10:30:03.023971    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 10:30:03.044581    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 10:30:03.772835    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 10:30:03.978042    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 10:30:04.006248    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0914 10:30:04.031375    4633 api_server.go:52] waiting for apiserver process to appear ...
	I0914 10:30:04.031469    4633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 10:30:04.533591    4633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 10:30:05.033517    4633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 10:30:05.038122    4633 api_server.go:72] duration metric: took 1.006790916s to wait for apiserver process to appear ...
	I0914 10:30:05.038130    4633 api_server.go:88] waiting for apiserver healthz status ...
	I0914 10:30:05.038140    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:30:10.038427    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:30:10.038535    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:30:15.039899    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:30:15.040002    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:30:20.040713    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:30:20.040762    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:30:25.041447    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:30:25.041555    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:30:30.042840    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:30:30.042935    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:30:35.044452    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:30:35.044548    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:30:40.046532    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:30:40.046630    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:30:45.049227    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:30:45.049329    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:30:50.051970    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:30:50.052069    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:30:55.054674    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:30:55.054775    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:31:00.056367    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:31:00.056470    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:31:05.058909    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
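From 10:30:05 onward every healthz probe times out after five seconds (the Client.Timeout in the errors above) and the loop retries until minikube gives up and starts gathering diagnostics. A minimal sketch of that polling pattern; InsecureSkipVerify stands in for minikube's CA-pinned client config:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout:   5 * time.Second, // matches the Client.Timeout errors in the log
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://10.0.2.15:8443/healthz")
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK && string(body) == "ok" {
                    fmt.Println("apiserver healthy")
                    return
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for apiserver healthz")
    }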
	I0914 10:31:05.059186    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:31:05.082114    4633 logs.go:276] 2 containers: [cc9f670924c6 ae03d68bb317]
	I0914 10:31:05.082271    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:31:05.098504    4633 logs.go:276] 2 containers: [07d594fbccfe 8076dfae9f44]
	I0914 10:31:05.098603    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:31:05.110920    4633 logs.go:276] 1 containers: [4bea8c7649df]
	I0914 10:31:05.111010    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:31:05.123406    4633 logs.go:276] 2 containers: [1102ad44a942 2320ee4845a9]
	I0914 10:31:05.123498    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:31:05.137638    4633 logs.go:276] 1 containers: [e4616e40153f]
	I0914 10:31:05.137723    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:31:05.148139    4633 logs.go:276] 2 containers: [da12d5ff26bb 24ffba65710c]
	I0914 10:31:05.148207    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:31:05.158098    4633 logs.go:276] 0 containers: []
	W0914 10:31:05.158107    4633 logs.go:278] No container was found matching "kindnet"
	I0914 10:31:05.158173    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:31:05.168411    4633 logs.go:276] 2 containers: [80bfcc2ce813 4a27e6945f3c]
	I0914 10:31:05.168429    4633 logs.go:123] Gathering logs for kube-proxy [e4616e40153f] ...
	I0914 10:31:05.168435    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4616e40153f"
	I0914 10:31:05.179691    4633 logs.go:123] Gathering logs for kubelet ...
	I0914 10:31:05.179701    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:31:05.218999    4633 logs.go:123] Gathering logs for kube-apiserver [ae03d68bb317] ...
	I0914 10:31:05.219007    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae03d68bb317"
	I0914 10:31:05.239969    4633 logs.go:123] Gathering logs for etcd [07d594fbccfe] ...
	I0914 10:31:05.239981    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07d594fbccfe"
	I0914 10:31:05.253404    4633 logs.go:123] Gathering logs for Docker ...
	I0914 10:31:05.253417    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:31:05.280033    4633 logs.go:123] Gathering logs for kube-apiserver [cc9f670924c6] ...
	I0914 10:31:05.280042    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc9f670924c6"
	I0914 10:31:05.293266    4633 logs.go:123] Gathering logs for etcd [8076dfae9f44] ...
	I0914 10:31:05.293277    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8076dfae9f44"
	I0914 10:31:05.311638    4633 logs.go:123] Gathering logs for kube-controller-manager [da12d5ff26bb] ...
	I0914 10:31:05.311649    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da12d5ff26bb"
	I0914 10:31:05.328684    4633 logs.go:123] Gathering logs for kube-controller-manager [24ffba65710c] ...
	I0914 10:31:05.328695    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24ffba65710c"
	I0914 10:31:05.340489    4633 logs.go:123] Gathering logs for storage-provisioner [80bfcc2ce813] ...
	I0914 10:31:05.340501    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80bfcc2ce813"
	I0914 10:31:05.351689    4633 logs.go:123] Gathering logs for kube-scheduler [1102ad44a942] ...
	I0914 10:31:05.351699    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1102ad44a942"
	I0914 10:31:05.364880    4633 logs.go:123] Gathering logs for kube-scheduler [2320ee4845a9] ...
	I0914 10:31:05.364890    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2320ee4845a9"
	I0914 10:31:05.381103    4633 logs.go:123] Gathering logs for storage-provisioner [4a27e6945f3c] ...
	I0914 10:31:05.381117    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a27e6945f3c"
	I0914 10:31:05.392354    4633 logs.go:123] Gathering logs for container status ...
	I0914 10:31:05.392365    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:31:05.404099    4633 logs.go:123] Gathering logs for dmesg ...
	I0914 10:31:05.404115    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:31:05.408478    4633 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:31:05.408484    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:31:05.477950    4633 logs.go:123] Gathering logs for coredns [4bea8c7649df] ...
	I0914 10:31:05.477959    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bea8c7649df"
	I0914 10:31:07.991257    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:31:12.993541    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
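
Every cycle in this stretch has the same shape: a probe of the apiserver's /healthz endpoint at 10.0.2.15:8443 (the guest's address on QEMU user-mode networking) that dies after roughly five seconds with a client timeout, then a full log-gathering pass, then about 2.5 seconds of idle time before the next probe. A minimal sketch of such a probe loop — an illustration, not minikube's actual api_server.go code — could look like:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			// per-attempt timeout; matches the ~5s gap between the
			// "Checking apiserver healthz" and "stopped" lines above
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// assumption: the guest apiserver cert is not trusted here
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		for {
			resp, err := client.Get("https://10.0.2.15:8443/healthz")
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("apiserver healthy")
					return
				}
			}
			time.Sleep(2500 * time.Millisecond) // ~gap observed between rounds
		}
	}

Every probe in this section fails the same way, so the loop never exits and the gathering passes simply repeat.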
	I0914 10:31:12.994108    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:31:13.034370    4633 logs.go:276] 2 containers: [cc9f670924c6 ae03d68bb317]
	I0914 10:31:13.034545    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:31:13.055896    4633 logs.go:276] 2 containers: [07d594fbccfe 8076dfae9f44]
	I0914 10:31:13.056004    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:31:13.070171    4633 logs.go:276] 1 containers: [4bea8c7649df]
	I0914 10:31:13.070261    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:31:13.082967    4633 logs.go:276] 2 containers: [1102ad44a942 2320ee4845a9]
	I0914 10:31:13.083055    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:31:13.093880    4633 logs.go:276] 1 containers: [e4616e40153f]
	I0914 10:31:13.093953    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:31:13.104263    4633 logs.go:276] 2 containers: [da12d5ff26bb 24ffba65710c]
	I0914 10:31:13.104355    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:31:13.114015    4633 logs.go:276] 0 containers: []
	W0914 10:31:13.114026    4633 logs.go:278] No container was found matching "kindnet"
	I0914 10:31:13.114087    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:31:13.124675    4633 logs.go:276] 2 containers: [80bfcc2ce813 4a27e6945f3c]
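
Each failed probe triggers a fresh container inventory: one "docker ps -a" per control-plane component, filtered on the k8s_ name prefix that kubelet's Docker integration (dockershim/cri-dockerd) gives its containers, with a Go template so only IDs are printed. Most components show two IDs because -a also lists exited containers — consistent with one instance from before a restart and one after. A standalone sketch of one such query, assuming a local docker CLI rather than minikube's ssh_runner inside the guest:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_kube-apiserver",
			"--format", "{{.ID}}").Output()
		if err != nil {
			fmt.Println("docker ps failed:", err)
			return
		}
		// one ID per line; Fields splits and drops the trailing newline
		ids := strings.Fields(string(out))
		fmt.Printf("%d containers: %v\n", len(ids), ids)
	}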
	I0914 10:31:13.124694    4633 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:31:13.124700    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:31:13.160996    4633 logs.go:123] Gathering logs for Docker ...
	I0914 10:31:13.161005    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:31:13.187802    4633 logs.go:123] Gathering logs for storage-provisioner [4a27e6945f3c] ...
	I0914 10:31:13.187812    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a27e6945f3c"
	I0914 10:31:13.198670    4633 logs.go:123] Gathering logs for dmesg ...
	I0914 10:31:13.198680    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:31:13.202972    4633 logs.go:123] Gathering logs for kube-apiserver [ae03d68bb317] ...
	I0914 10:31:13.202978    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae03d68bb317"
	I0914 10:31:13.227449    4633 logs.go:123] Gathering logs for etcd [07d594fbccfe] ...
	I0914 10:31:13.227460    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07d594fbccfe"
	I0914 10:31:13.241889    4633 logs.go:123] Gathering logs for etcd [8076dfae9f44] ...
	I0914 10:31:13.241899    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8076dfae9f44"
	I0914 10:31:13.259360    4633 logs.go:123] Gathering logs for kube-scheduler [1102ad44a942] ...
	I0914 10:31:13.259370    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1102ad44a942"
	I0914 10:31:13.270336    4633 logs.go:123] Gathering logs for kube-controller-manager [da12d5ff26bb] ...
	I0914 10:31:13.270350    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da12d5ff26bb"
	I0914 10:31:13.287815    4633 logs.go:123] Gathering logs for kube-controller-manager [24ffba65710c] ...
	I0914 10:31:13.287824    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24ffba65710c"
	I0914 10:31:13.300678    4633 logs.go:123] Gathering logs for kubelet ...
	I0914 10:31:13.300690    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:31:13.342739    4633 logs.go:123] Gathering logs for coredns [4bea8c7649df] ...
	I0914 10:31:13.342748    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bea8c7649df"
	I0914 10:31:13.358344    4633 logs.go:123] Gathering logs for kube-scheduler [2320ee4845a9] ...
	I0914 10:31:13.358357    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2320ee4845a9"
	I0914 10:31:13.373281    4633 logs.go:123] Gathering logs for kube-apiserver [cc9f670924c6] ...
	I0914 10:31:13.373292    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc9f670924c6"
	I0914 10:31:13.386951    4633 logs.go:123] Gathering logs for kube-proxy [e4616e40153f] ...
	I0914 10:31:13.386962    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4616e40153f"
	I0914 10:31:13.399197    4633 logs.go:123] Gathering logs for storage-provisioner [80bfcc2ce813] ...
	I0914 10:31:13.399206    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80bfcc2ce813"
	I0914 10:31:13.410638    4633 logs.go:123] Gathering logs for container status ...
	I0914 10:31:13.410648    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
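
The "container status" line is a single shell command with a built-in fallback: the backquotes expand to crictl's full path when "which crictl" succeeds, and to the bare word crictl otherwise; in the latter case the sudo invocation fails to find the binary and the outer "||" drops through to "sudo docker ps -a". A sketch of running the identical line locally, assuming bash and sudo are available as they are in the guest:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// same command string as the ssh_runner line above; CombinedOutput
		// keeps stderr, so a missing crictl remains visible in the output
		out, err := exec.Command("/bin/bash", "-c",
			"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a").CombinedOutput()
		if err != nil {
			fmt.Println("both crictl and docker listings failed:", err)
		}
		fmt.Print(string(out))
	}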
	I0914 10:31:15.922796    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:31:20.925128    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:31:20.925696    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:31:20.965855    4633 logs.go:276] 2 containers: [cc9f670924c6 ae03d68bb317]
	I0914 10:31:20.966019    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:31:20.989067    4633 logs.go:276] 2 containers: [07d594fbccfe 8076dfae9f44]
	I0914 10:31:20.989193    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:31:21.003971    4633 logs.go:276] 1 containers: [4bea8c7649df]
	I0914 10:31:21.004063    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:31:21.016029    4633 logs.go:276] 2 containers: [1102ad44a942 2320ee4845a9]
	I0914 10:31:21.016116    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:31:21.026857    4633 logs.go:276] 1 containers: [e4616e40153f]
	I0914 10:31:21.026941    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:31:21.037485    4633 logs.go:276] 2 containers: [da12d5ff26bb 24ffba65710c]
	I0914 10:31:21.037560    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:31:21.047613    4633 logs.go:276] 0 containers: []
	W0914 10:31:21.047624    4633 logs.go:278] No container was found matching "kindnet"
	I0914 10:31:21.047699    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:31:21.058279    4633 logs.go:276] 2 containers: [80bfcc2ce813 4a27e6945f3c]
	I0914 10:31:21.058297    4633 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:31:21.058305    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:31:21.092784    4633 logs.go:123] Gathering logs for etcd [8076dfae9f44] ...
	I0914 10:31:21.092795    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8076dfae9f44"
	I0914 10:31:21.113921    4633 logs.go:123] Gathering logs for kube-scheduler [1102ad44a942] ...
	I0914 10:31:21.113930    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1102ad44a942"
	I0914 10:31:21.125842    4633 logs.go:123] Gathering logs for kube-controller-manager [24ffba65710c] ...
	I0914 10:31:21.125853    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24ffba65710c"
	I0914 10:31:21.137370    4633 logs.go:123] Gathering logs for kube-apiserver [ae03d68bb317] ...
	I0914 10:31:21.137380    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae03d68bb317"
	I0914 10:31:21.156728    4633 logs.go:123] Gathering logs for etcd [07d594fbccfe] ...
	I0914 10:31:21.156739    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07d594fbccfe"
	I0914 10:31:21.170557    4633 logs.go:123] Gathering logs for coredns [4bea8c7649df] ...
	I0914 10:31:21.170567    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bea8c7649df"
	I0914 10:31:21.181274    4633 logs.go:123] Gathering logs for kube-scheduler [2320ee4845a9] ...
	I0914 10:31:21.181286    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2320ee4845a9"
	I0914 10:31:21.196445    4633 logs.go:123] Gathering logs for kube-proxy [e4616e40153f] ...
	I0914 10:31:21.196455    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4616e40153f"
	I0914 10:31:21.207838    4633 logs.go:123] Gathering logs for storage-provisioner [80bfcc2ce813] ...
	I0914 10:31:21.207849    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80bfcc2ce813"
	I0914 10:31:21.219192    4633 logs.go:123] Gathering logs for storage-provisioner [4a27e6945f3c] ...
	I0914 10:31:21.219203    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a27e6945f3c"
	I0914 10:31:21.229752    4633 logs.go:123] Gathering logs for kubelet ...
	I0914 10:31:21.229762    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:31:21.268685    4633 logs.go:123] Gathering logs for dmesg ...
	I0914 10:31:21.268693    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:31:21.273377    4633 logs.go:123] Gathering logs for Docker ...
	I0914 10:31:21.273383    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:31:21.299594    4633 logs.go:123] Gathering logs for kube-apiserver [cc9f670924c6] ...
	I0914 10:31:21.299602    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc9f670924c6"
	I0914 10:31:21.319779    4633 logs.go:123] Gathering logs for kube-controller-manager [da12d5ff26bb] ...
	I0914 10:31:21.319788    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da12d5ff26bb"
	I0914 10:31:21.337431    4633 logs.go:123] Gathering logs for container status ...
	I0914 10:31:21.337441    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:31:23.850149    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:31:28.852694    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:31:28.853225    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:31:28.886415    4633 logs.go:276] 2 containers: [cc9f670924c6 ae03d68bb317]
	I0914 10:31:28.886568    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:31:28.906638    4633 logs.go:276] 2 containers: [07d594fbccfe 8076dfae9f44]
	I0914 10:31:28.906757    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:31:28.920299    4633 logs.go:276] 1 containers: [4bea8c7649df]
	I0914 10:31:28.920390    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:31:28.932222    4633 logs.go:276] 2 containers: [1102ad44a942 2320ee4845a9]
	I0914 10:31:28.932310    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:31:28.942953    4633 logs.go:276] 1 containers: [e4616e40153f]
	I0914 10:31:28.943036    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:31:28.953098    4633 logs.go:276] 2 containers: [da12d5ff26bb 24ffba65710c]
	I0914 10:31:28.953184    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:31:28.963123    4633 logs.go:276] 0 containers: []
	W0914 10:31:28.963134    4633 logs.go:278] No container was found matching "kindnet"
	I0914 10:31:28.963197    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:31:28.973446    4633 logs.go:276] 2 containers: [80bfcc2ce813 4a27e6945f3c]
	I0914 10:31:28.973463    4633 logs.go:123] Gathering logs for kube-scheduler [2320ee4845a9] ...
	I0914 10:31:28.973468    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2320ee4845a9"
	I0914 10:31:28.989185    4633 logs.go:123] Gathering logs for kube-proxy [e4616e40153f] ...
	I0914 10:31:28.989195    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4616e40153f"
	I0914 10:31:29.001346    4633 logs.go:123] Gathering logs for kube-controller-manager [da12d5ff26bb] ...
	I0914 10:31:29.001355    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da12d5ff26bb"
	I0914 10:31:29.018477    4633 logs.go:123] Gathering logs for kube-controller-manager [24ffba65710c] ...
	I0914 10:31:29.018486    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24ffba65710c"
	I0914 10:31:29.031408    4633 logs.go:123] Gathering logs for kubelet ...
	I0914 10:31:29.031420    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:31:29.074578    4633 logs.go:123] Gathering logs for storage-provisioner [80bfcc2ce813] ...
	I0914 10:31:29.074593    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80bfcc2ce813"
	I0914 10:31:29.086633    4633 logs.go:123] Gathering logs for etcd [8076dfae9f44] ...
	I0914 10:31:29.086646    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8076dfae9f44"
	I0914 10:31:29.103697    4633 logs.go:123] Gathering logs for etcd [07d594fbccfe] ...
	I0914 10:31:29.103707    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07d594fbccfe"
	I0914 10:31:29.117536    4633 logs.go:123] Gathering logs for coredns [4bea8c7649df] ...
	I0914 10:31:29.117545    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bea8c7649df"
	I0914 10:31:29.130389    4633 logs.go:123] Gathering logs for Docker ...
	I0914 10:31:29.130400    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:31:29.154656    4633 logs.go:123] Gathering logs for container status ...
	I0914 10:31:29.154663    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:31:29.166981    4633 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:31:29.166992    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:31:29.204730    4633 logs.go:123] Gathering logs for kube-apiserver [cc9f670924c6] ...
	I0914 10:31:29.204738    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc9f670924c6"
	I0914 10:31:29.218464    4633 logs.go:123] Gathering logs for kube-apiserver [ae03d68bb317] ...
	I0914 10:31:29.218474    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae03d68bb317"
	I0914 10:31:29.238876    4633 logs.go:123] Gathering logs for kube-scheduler [1102ad44a942] ...
	I0914 10:31:29.238889    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1102ad44a942"
	I0914 10:31:29.250177    4633 logs.go:123] Gathering logs for storage-provisioner [4a27e6945f3c] ...
	I0914 10:31:29.250187    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a27e6945f3c"
	I0914 10:31:29.261358    4633 logs.go:123] Gathering logs for dmesg ...
	I0914 10:31:29.261370    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
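
The dmesg pass keeps only kernel messages at warning severity and above: --level warn,err,crit,alert,emerg filters by priority, -H formats timestamps for humans, -L=never strips the color codes -H would otherwise add, -P avoids the pager, and tail -n 400 caps the output the same way the --tail 400 docker calls do. The few milliseconds this step takes in each round suggest the ring buffer holds almost nothing at these levels.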
	I0914 10:31:31.767727    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:31:36.770336    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:31:36.770909    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:31:36.810828    4633 logs.go:276] 2 containers: [cc9f670924c6 ae03d68bb317]
	I0914 10:31:36.810993    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:31:36.831795    4633 logs.go:276] 2 containers: [07d594fbccfe 8076dfae9f44]
	I0914 10:31:36.831923    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:31:36.846500    4633 logs.go:276] 1 containers: [4bea8c7649df]
	I0914 10:31:36.846602    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:31:36.859104    4633 logs.go:276] 2 containers: [1102ad44a942 2320ee4845a9]
	I0914 10:31:36.859196    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:31:36.874359    4633 logs.go:276] 1 containers: [e4616e40153f]
	I0914 10:31:36.874434    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:31:36.887611    4633 logs.go:276] 2 containers: [da12d5ff26bb 24ffba65710c]
	I0914 10:31:36.887691    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:31:36.902770    4633 logs.go:276] 0 containers: []
	W0914 10:31:36.902782    4633 logs.go:278] No container was found matching "kindnet"
	I0914 10:31:36.902861    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:31:36.914273    4633 logs.go:276] 2 containers: [80bfcc2ce813 4a27e6945f3c]
	I0914 10:31:36.914291    4633 logs.go:123] Gathering logs for kube-apiserver [ae03d68bb317] ...
	I0914 10:31:36.914296    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae03d68bb317"
	I0914 10:31:36.934618    4633 logs.go:123] Gathering logs for kube-proxy [e4616e40153f] ...
	I0914 10:31:36.934628    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4616e40153f"
	I0914 10:31:36.945886    4633 logs.go:123] Gathering logs for Docker ...
	I0914 10:31:36.945898    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:31:36.970906    4633 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:31:36.970920    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:31:37.005578    4633 logs.go:123] Gathering logs for etcd [07d594fbccfe] ...
	I0914 10:31:37.005593    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07d594fbccfe"
	I0914 10:31:37.020163    4633 logs.go:123] Gathering logs for etcd [8076dfae9f44] ...
	I0914 10:31:37.020173    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8076dfae9f44"
	I0914 10:31:37.037290    4633 logs.go:123] Gathering logs for kube-scheduler [1102ad44a942] ...
	I0914 10:31:37.037300    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1102ad44a942"
	I0914 10:31:37.049358    4633 logs.go:123] Gathering logs for container status ...
	I0914 10:31:37.049367    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:31:37.060744    4633 logs.go:123] Gathering logs for dmesg ...
	I0914 10:31:37.060753    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:31:37.065338    4633 logs.go:123] Gathering logs for kube-apiserver [cc9f670924c6] ...
	I0914 10:31:37.065346    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc9f670924c6"
	I0914 10:31:37.081916    4633 logs.go:123] Gathering logs for coredns [4bea8c7649df] ...
	I0914 10:31:37.081926    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bea8c7649df"
	I0914 10:31:37.093541    4633 logs.go:123] Gathering logs for kube-scheduler [2320ee4845a9] ...
	I0914 10:31:37.093557    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2320ee4845a9"
	I0914 10:31:37.108165    4633 logs.go:123] Gathering logs for kube-controller-manager [24ffba65710c] ...
	I0914 10:31:37.108178    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24ffba65710c"
	I0914 10:31:37.122667    4633 logs.go:123] Gathering logs for kubelet ...
	I0914 10:31:37.122680    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:31:37.164119    4633 logs.go:123] Gathering logs for kube-controller-manager [da12d5ff26bb] ...
	I0914 10:31:37.164130    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da12d5ff26bb"
	I0914 10:31:37.181314    4633 logs.go:123] Gathering logs for storage-provisioner [80bfcc2ce813] ...
	I0914 10:31:37.181323    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80bfcc2ce813"
	I0914 10:31:37.192533    4633 logs.go:123] Gathering logs for storage-provisioner [4a27e6945f3c] ...
	I0914 10:31:37.192549    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a27e6945f3c"
	I0914 10:31:39.705257    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:31:44.708132    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:31:44.708630    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:31:44.754106    4633 logs.go:276] 2 containers: [cc9f670924c6 ae03d68bb317]
	I0914 10:31:44.754258    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:31:44.773448    4633 logs.go:276] 2 containers: [07d594fbccfe 8076dfae9f44]
	I0914 10:31:44.773584    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:31:44.788010    4633 logs.go:276] 1 containers: [4bea8c7649df]
	I0914 10:31:44.788091    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:31:44.800692    4633 logs.go:276] 2 containers: [1102ad44a942 2320ee4845a9]
	I0914 10:31:44.800780    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:31:44.811140    4633 logs.go:276] 1 containers: [e4616e40153f]
	I0914 10:31:44.811226    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:31:44.825095    4633 logs.go:276] 2 containers: [da12d5ff26bb 24ffba65710c]
	I0914 10:31:44.825174    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:31:44.835458    4633 logs.go:276] 0 containers: []
	W0914 10:31:44.835469    4633 logs.go:278] No container was found matching "kindnet"
	I0914 10:31:44.835541    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:31:44.846714    4633 logs.go:276] 2 containers: [80bfcc2ce813 4a27e6945f3c]
	I0914 10:31:44.846734    4633 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:31:44.846742    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:31:44.882125    4633 logs.go:123] Gathering logs for kube-scheduler [1102ad44a942] ...
	I0914 10:31:44.882136    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1102ad44a942"
	I0914 10:31:44.893997    4633 logs.go:123] Gathering logs for kube-controller-manager [24ffba65710c] ...
	I0914 10:31:44.894008    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24ffba65710c"
	I0914 10:31:44.905320    4633 logs.go:123] Gathering logs for dmesg ...
	I0914 10:31:44.905330    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:31:44.909578    4633 logs.go:123] Gathering logs for coredns [4bea8c7649df] ...
	I0914 10:31:44.909583    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bea8c7649df"
	I0914 10:31:44.920319    4633 logs.go:123] Gathering logs for kube-scheduler [2320ee4845a9] ...
	I0914 10:31:44.920330    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2320ee4845a9"
	I0914 10:31:44.935088    4633 logs.go:123] Gathering logs for container status ...
	I0914 10:31:44.935099    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:31:44.946502    4633 logs.go:123] Gathering logs for kube-apiserver [ae03d68bb317] ...
	I0914 10:31:44.946514    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae03d68bb317"
	I0914 10:31:44.967666    4633 logs.go:123] Gathering logs for storage-provisioner [80bfcc2ce813] ...
	I0914 10:31:44.967676    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80bfcc2ce813"
	I0914 10:31:44.983274    4633 logs.go:123] Gathering logs for storage-provisioner [4a27e6945f3c] ...
	I0914 10:31:44.983292    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a27e6945f3c"
	I0914 10:31:44.994262    4633 logs.go:123] Gathering logs for kubelet ...
	I0914 10:31:44.994272    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:31:45.033112    4633 logs.go:123] Gathering logs for kube-apiserver [cc9f670924c6] ...
	I0914 10:31:45.033119    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc9f670924c6"
	I0914 10:31:45.047220    4633 logs.go:123] Gathering logs for etcd [07d594fbccfe] ...
	I0914 10:31:45.047230    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07d594fbccfe"
	I0914 10:31:45.061998    4633 logs.go:123] Gathering logs for etcd [8076dfae9f44] ...
	I0914 10:31:45.062008    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8076dfae9f44"
	I0914 10:31:45.079367    4633 logs.go:123] Gathering logs for kube-proxy [e4616e40153f] ...
	I0914 10:31:45.079381    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4616e40153f"
	I0914 10:31:45.091171    4633 logs.go:123] Gathering logs for kube-controller-manager [da12d5ff26bb] ...
	I0914 10:31:45.091184    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da12d5ff26bb"
	I0914 10:31:45.109201    4633 logs.go:123] Gathering logs for Docker ...
	I0914 10:31:45.109213    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
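
Besides the per-container docker logs calls, each round pulls the last 400 lines of the kubelet journal and of the docker and cri-docker units in a single journalctl invocation, mirroring the --tail 400 cap used for containers. A table-driven sketch of that fan-out, assuming the commands run locally under bash the way ssh_runner runs them in the guest (the container IDs are the ones enumerated above):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		sources := []struct{ name, cmd string }{
			{"kubelet", "sudo journalctl -u kubelet -n 400"},
			{"Docker", "sudo journalctl -u docker -u cri-docker -n 400"},
		}
		// add one docker logs source per enumerated container ID
		for _, id := range []string{"cc9f670924c6", "ae03d68bb317"} {
			sources = append(sources, struct{ name, cmd string }{
				"kube-apiserver [" + id + "]",
				"docker logs --tail 400 " + id,
			})
		}
		for _, s := range sources {
			fmt.Printf("==> %s <==\n", s.name)
			out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
			if err != nil {
				fmt.Printf("gathering %s failed: %v\n", s.name, err)
				continue
			}
			fmt.Print(string(out))
		}
	}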
	I0914 10:31:47.637029    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:31:52.639246    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:31:52.639891    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:31:52.679540    4633 logs.go:276] 2 containers: [cc9f670924c6 ae03d68bb317]
	I0914 10:31:52.679706    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:31:52.701471    4633 logs.go:276] 2 containers: [07d594fbccfe 8076dfae9f44]
	I0914 10:31:52.701614    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:31:52.716110    4633 logs.go:276] 1 containers: [4bea8c7649df]
	I0914 10:31:52.716203    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:31:52.728362    4633 logs.go:276] 2 containers: [1102ad44a942 2320ee4845a9]
	I0914 10:31:52.728448    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:31:52.740008    4633 logs.go:276] 1 containers: [e4616e40153f]
	I0914 10:31:52.740092    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:31:52.750810    4633 logs.go:276] 2 containers: [da12d5ff26bb 24ffba65710c]
	I0914 10:31:52.750886    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:31:52.760949    4633 logs.go:276] 0 containers: []
	W0914 10:31:52.760959    4633 logs.go:278] No container was found matching "kindnet"
	I0914 10:31:52.761020    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:31:52.772056    4633 logs.go:276] 2 containers: [80bfcc2ce813 4a27e6945f3c]
	I0914 10:31:52.772072    4633 logs.go:123] Gathering logs for coredns [4bea8c7649df] ...
	I0914 10:31:52.772077    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bea8c7649df"
	I0914 10:31:52.783607    4633 logs.go:123] Gathering logs for storage-provisioner [4a27e6945f3c] ...
	I0914 10:31:52.783618    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a27e6945f3c"
	I0914 10:31:52.799102    4633 logs.go:123] Gathering logs for dmesg ...
	I0914 10:31:52.799113    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:31:52.803757    4633 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:31:52.803766    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:31:52.839265    4633 logs.go:123] Gathering logs for kube-apiserver [ae03d68bb317] ...
	I0914 10:31:52.839277    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae03d68bb317"
	I0914 10:31:52.860301    4633 logs.go:123] Gathering logs for kube-scheduler [2320ee4845a9] ...
	I0914 10:31:52.860311    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2320ee4845a9"
	I0914 10:31:52.874845    4633 logs.go:123] Gathering logs for kube-controller-manager [24ffba65710c] ...
	I0914 10:31:52.874858    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24ffba65710c"
	I0914 10:31:52.886326    4633 logs.go:123] Gathering logs for kubelet ...
	I0914 10:31:52.886337    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:31:52.926870    4633 logs.go:123] Gathering logs for etcd [07d594fbccfe] ...
	I0914 10:31:52.926883    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07d594fbccfe"
	I0914 10:31:52.940819    4633 logs.go:123] Gathering logs for etcd [8076dfae9f44] ...
	I0914 10:31:52.940831    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8076dfae9f44"
	I0914 10:31:52.958228    4633 logs.go:123] Gathering logs for Docker ...
	I0914 10:31:52.958238    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:31:52.983526    4633 logs.go:123] Gathering logs for container status ...
	I0914 10:31:52.983532    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:31:52.995374    4633 logs.go:123] Gathering logs for kube-apiserver [cc9f670924c6] ...
	I0914 10:31:52.995384    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc9f670924c6"
	I0914 10:31:53.009856    4633 logs.go:123] Gathering logs for kube-scheduler [1102ad44a942] ...
	I0914 10:31:53.009868    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1102ad44a942"
	I0914 10:31:53.021719    4633 logs.go:123] Gathering logs for kube-proxy [e4616e40153f] ...
	I0914 10:31:53.021730    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4616e40153f"
	I0914 10:31:53.034373    4633 logs.go:123] Gathering logs for kube-controller-manager [da12d5ff26bb] ...
	I0914 10:31:53.034385    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da12d5ff26bb"
	I0914 10:31:53.051492    4633 logs.go:123] Gathering logs for storage-provisioner [80bfcc2ce813] ...
	I0914 10:31:53.051504    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80bfcc2ce813"
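
One more source in each round deserves a note: the describe-nodes step avoids any host-side kubectl and instead runs the version-matched binary minikube stages inside the guest at /var/lib/minikube/binaries/v1.24.1/kubectl, pointed at the guest's own kubeconfig, so the dump reflects what the cluster's pinned client sees. Only the command appears in this log, not its output, so whether it returned node details or an error is not visible here.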
	I0914 10:31:55.563370    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:32:00.566015    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:32:00.566578    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:32:00.603923    4633 logs.go:276] 2 containers: [cc9f670924c6 ae03d68bb317]
	I0914 10:32:00.604081    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:32:00.623905    4633 logs.go:276] 2 containers: [07d594fbccfe 8076dfae9f44]
	I0914 10:32:00.624040    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:32:00.639143    4633 logs.go:276] 1 containers: [4bea8c7649df]
	I0914 10:32:00.639217    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:32:00.654088    4633 logs.go:276] 2 containers: [1102ad44a942 2320ee4845a9]
	I0914 10:32:00.654164    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:32:00.664517    4633 logs.go:276] 1 containers: [e4616e40153f]
	I0914 10:32:00.664591    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:32:00.674964    4633 logs.go:276] 2 containers: [da12d5ff26bb 24ffba65710c]
	I0914 10:32:00.675027    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:32:00.684463    4633 logs.go:276] 0 containers: []
	W0914 10:32:00.684472    4633 logs.go:278] No container was found matching "kindnet"
	I0914 10:32:00.684530    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:32:00.694442    4633 logs.go:276] 2 containers: [80bfcc2ce813 4a27e6945f3c]
	I0914 10:32:00.694460    4633 logs.go:123] Gathering logs for etcd [07d594fbccfe] ...
	I0914 10:32:00.694466    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07d594fbccfe"
	I0914 10:32:00.710578    4633 logs.go:123] Gathering logs for kube-scheduler [1102ad44a942] ...
	I0914 10:32:00.710589    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1102ad44a942"
	I0914 10:32:00.721962    4633 logs.go:123] Gathering logs for kube-controller-manager [24ffba65710c] ...
	I0914 10:32:00.721976    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24ffba65710c"
	I0914 10:32:00.733678    4633 logs.go:123] Gathering logs for storage-provisioner [80bfcc2ce813] ...
	I0914 10:32:00.733686    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80bfcc2ce813"
	I0914 10:32:00.745203    4633 logs.go:123] Gathering logs for Docker ...
	I0914 10:32:00.745218    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:32:00.769388    4633 logs.go:123] Gathering logs for container status ...
	I0914 10:32:00.769395    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:32:00.781130    4633 logs.go:123] Gathering logs for kube-apiserver [cc9f670924c6] ...
	I0914 10:32:00.781140    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc9f670924c6"
	I0914 10:32:00.794908    4633 logs.go:123] Gathering logs for kube-apiserver [ae03d68bb317] ...
	I0914 10:32:00.794917    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae03d68bb317"
	I0914 10:32:00.814594    4633 logs.go:123] Gathering logs for coredns [4bea8c7649df] ...
	I0914 10:32:00.814603    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bea8c7649df"
	I0914 10:32:00.825864    4633 logs.go:123] Gathering logs for kube-proxy [e4616e40153f] ...
	I0914 10:32:00.825876    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4616e40153f"
	I0914 10:32:00.840802    4633 logs.go:123] Gathering logs for kubelet ...
	I0914 10:32:00.840814    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:32:00.883238    4633 logs.go:123] Gathering logs for dmesg ...
	I0914 10:32:00.883249    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:32:00.887917    4633 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:32:00.887926    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:32:00.925018    4633 logs.go:123] Gathering logs for kube-controller-manager [da12d5ff26bb] ...
	I0914 10:32:00.925030    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da12d5ff26bb"
	I0914 10:32:00.942119    4633 logs.go:123] Gathering logs for storage-provisioner [4a27e6945f3c] ...
	I0914 10:32:00.942129    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a27e6945f3c"
	I0914 10:32:00.953185    4633 logs.go:123] Gathering logs for etcd [8076dfae9f44] ...
	I0914 10:32:00.953196    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8076dfae9f44"
	I0914 10:32:00.971252    4633 logs.go:123] Gathering logs for kube-scheduler [2320ee4845a9] ...
	I0914 10:32:00.971263    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2320ee4845a9"
	I0914 10:32:03.488668    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:32:08.490078    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:32:08.490323    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:32:08.505252    4633 logs.go:276] 2 containers: [cc9f670924c6 ae03d68bb317]
	I0914 10:32:08.505341    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:32:08.516022    4633 logs.go:276] 2 containers: [07d594fbccfe 8076dfae9f44]
	I0914 10:32:08.516105    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:32:08.527161    4633 logs.go:276] 1 containers: [4bea8c7649df]
	I0914 10:32:08.527243    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:32:08.538085    4633 logs.go:276] 2 containers: [1102ad44a942 2320ee4845a9]
	I0914 10:32:08.538165    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:32:08.549969    4633 logs.go:276] 1 containers: [e4616e40153f]
	I0914 10:32:08.550054    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:32:08.561091    4633 logs.go:276] 2 containers: [da12d5ff26bb 24ffba65710c]
	I0914 10:32:08.561169    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:32:08.572829    4633 logs.go:276] 0 containers: []
	W0914 10:32:08.572841    4633 logs.go:278] No container was found matching "kindnet"
	I0914 10:32:08.572911    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:32:08.584087    4633 logs.go:276] 2 containers: [80bfcc2ce813 4a27e6945f3c]
	I0914 10:32:08.584108    4633 logs.go:123] Gathering logs for Docker ...
	I0914 10:32:08.584113    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:32:08.609880    4633 logs.go:123] Gathering logs for container status ...
	I0914 10:32:08.609889    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:32:08.621994    4633 logs.go:123] Gathering logs for kubelet ...
	I0914 10:32:08.622004    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:32:08.663507    4633 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:32:08.663518    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:32:08.701101    4633 logs.go:123] Gathering logs for kube-scheduler [2320ee4845a9] ...
	I0914 10:32:08.701117    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2320ee4845a9"
	I0914 10:32:08.716828    4633 logs.go:123] Gathering logs for kube-controller-manager [24ffba65710c] ...
	I0914 10:32:08.716840    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24ffba65710c"
	I0914 10:32:08.729564    4633 logs.go:123] Gathering logs for storage-provisioner [80bfcc2ce813] ...
	I0914 10:32:08.729576    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80bfcc2ce813"
	I0914 10:32:08.741861    4633 logs.go:123] Gathering logs for kube-apiserver [ae03d68bb317] ...
	I0914 10:32:08.741872    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae03d68bb317"
	I0914 10:32:08.764044    4633 logs.go:123] Gathering logs for etcd [07d594fbccfe] ...
	I0914 10:32:08.764056    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07d594fbccfe"
	I0914 10:32:08.781439    4633 logs.go:123] Gathering logs for kube-scheduler [1102ad44a942] ...
	I0914 10:32:08.781455    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1102ad44a942"
	I0914 10:32:08.798154    4633 logs.go:123] Gathering logs for kube-controller-manager [da12d5ff26bb] ...
	I0914 10:32:08.798165    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da12d5ff26bb"
	I0914 10:32:08.815290    4633 logs.go:123] Gathering logs for storage-provisioner [4a27e6945f3c] ...
	I0914 10:32:08.815300    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a27e6945f3c"
	I0914 10:32:08.826818    4633 logs.go:123] Gathering logs for dmesg ...
	I0914 10:32:08.826832    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:32:08.831553    4633 logs.go:123] Gathering logs for kube-apiserver [cc9f670924c6] ...
	I0914 10:32:08.831564    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc9f670924c6"
	I0914 10:32:08.845533    4633 logs.go:123] Gathering logs for etcd [8076dfae9f44] ...
	I0914 10:32:08.845544    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8076dfae9f44"
	I0914 10:32:08.865091    4633 logs.go:123] Gathering logs for coredns [4bea8c7649df] ...
	I0914 10:32:08.865109    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bea8c7649df"
	I0914 10:32:08.877912    4633 logs.go:123] Gathering logs for kube-proxy [e4616e40153f] ...
	I0914 10:32:08.877927    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4616e40153f"
	I0914 10:32:11.402837    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:32:16.405435    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:32:16.405629    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:32:16.417318    4633 logs.go:276] 2 containers: [cc9f670924c6 ae03d68bb317]
	I0914 10:32:16.417412    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:32:16.427803    4633 logs.go:276] 2 containers: [07d594fbccfe 8076dfae9f44]
	I0914 10:32:16.427891    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:32:16.438621    4633 logs.go:276] 1 containers: [4bea8c7649df]
	I0914 10:32:16.438704    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:32:16.449542    4633 logs.go:276] 2 containers: [1102ad44a942 2320ee4845a9]
	I0914 10:32:16.449629    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:32:16.460352    4633 logs.go:276] 1 containers: [e4616e40153f]
	I0914 10:32:16.460426    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:32:16.470952    4633 logs.go:276] 2 containers: [da12d5ff26bb 24ffba65710c]
	I0914 10:32:16.471021    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:32:16.482727    4633 logs.go:276] 0 containers: []
	W0914 10:32:16.482738    4633 logs.go:278] No container was found matching "kindnet"
	I0914 10:32:16.482810    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:32:16.498478    4633 logs.go:276] 2 containers: [80bfcc2ce813 4a27e6945f3c]
	I0914 10:32:16.498496    4633 logs.go:123] Gathering logs for container status ...
	I0914 10:32:16.498501    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:32:16.511034    4633 logs.go:123] Gathering logs for dmesg ...
	I0914 10:32:16.511045    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:32:16.515557    4633 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:32:16.515563    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:32:16.557122    4633 logs.go:123] Gathering logs for kube-apiserver [cc9f670924c6] ...
	I0914 10:32:16.557138    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc9f670924c6"
	I0914 10:32:16.571790    4633 logs.go:123] Gathering logs for kube-controller-manager [da12d5ff26bb] ...
	I0914 10:32:16.571801    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da12d5ff26bb"
	I0914 10:32:16.589210    4633 logs.go:123] Gathering logs for storage-provisioner [80bfcc2ce813] ...
	I0914 10:32:16.589220    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80bfcc2ce813"
	I0914 10:32:16.601628    4633 logs.go:123] Gathering logs for Docker ...
	I0914 10:32:16.601644    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:32:16.625848    4633 logs.go:123] Gathering logs for etcd [07d594fbccfe] ...
	I0914 10:32:16.625856    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07d594fbccfe"
	I0914 10:32:16.639646    4633 logs.go:123] Gathering logs for kube-scheduler [1102ad44a942] ...
	I0914 10:32:16.639656    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1102ad44a942"
	I0914 10:32:16.651279    4633 logs.go:123] Gathering logs for kube-scheduler [2320ee4845a9] ...
	I0914 10:32:16.651289    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2320ee4845a9"
	I0914 10:32:16.667389    4633 logs.go:123] Gathering logs for kubelet ...
	I0914 10:32:16.667398    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:32:16.710201    4633 logs.go:123] Gathering logs for coredns [4bea8c7649df] ...
	I0914 10:32:16.710209    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bea8c7649df"
	I0914 10:32:16.725273    4633 logs.go:123] Gathering logs for kube-controller-manager [24ffba65710c] ...
	I0914 10:32:16.725283    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24ffba65710c"
	I0914 10:32:16.737076    4633 logs.go:123] Gathering logs for storage-provisioner [4a27e6945f3c] ...
	I0914 10:32:16.737087    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a27e6945f3c"
	I0914 10:32:16.748648    4633 logs.go:123] Gathering logs for kube-apiserver [ae03d68bb317] ...
	I0914 10:32:16.748659    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae03d68bb317"
	I0914 10:32:16.769279    4633 logs.go:123] Gathering logs for etcd [8076dfae9f44] ...
	I0914 10:32:16.769289    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8076dfae9f44"
	I0914 10:32:16.786996    4633 logs.go:123] Gathering logs for kube-proxy [e4616e40153f] ...
	I0914 10:32:16.787009    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4616e40153f"
	I0914 10:32:19.300734    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:32:24.302854    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:32:24.303148    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:32:24.326595    4633 logs.go:276] 2 containers: [cc9f670924c6 ae03d68bb317]
	I0914 10:32:24.326720    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:32:24.342835    4633 logs.go:276] 2 containers: [07d594fbccfe 8076dfae9f44]
	I0914 10:32:24.342927    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:32:24.355601    4633 logs.go:276] 1 containers: [4bea8c7649df]
	I0914 10:32:24.355690    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:32:24.366953    4633 logs.go:276] 2 containers: [1102ad44a942 2320ee4845a9]
	I0914 10:32:24.367044    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:32:24.377255    4633 logs.go:276] 1 containers: [e4616e40153f]
	I0914 10:32:24.377330    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:32:24.387943    4633 logs.go:276] 2 containers: [da12d5ff26bb 24ffba65710c]
	I0914 10:32:24.388014    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:32:24.399353    4633 logs.go:276] 0 containers: []
	W0914 10:32:24.399366    4633 logs.go:278] No container was found matching "kindnet"
	I0914 10:32:24.399435    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:32:24.415341    4633 logs.go:276] 2 containers: [80bfcc2ce813 4a27e6945f3c]
	I0914 10:32:24.415356    4633 logs.go:123] Gathering logs for kube-controller-manager [24ffba65710c] ...
	I0914 10:32:24.415360    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24ffba65710c"
	I0914 10:32:24.426678    4633 logs.go:123] Gathering logs for storage-provisioner [80bfcc2ce813] ...
	I0914 10:32:24.426694    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80bfcc2ce813"
	I0914 10:32:24.442096    4633 logs.go:123] Gathering logs for storage-provisioner [4a27e6945f3c] ...
	I0914 10:32:24.442105    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a27e6945f3c"
	I0914 10:32:24.453211    4633 logs.go:123] Gathering logs for kubelet ...
	I0914 10:32:24.453227    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:32:24.492737    4633 logs.go:123] Gathering logs for kube-apiserver [ae03d68bb317] ...
	I0914 10:32:24.492747    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae03d68bb317"
	I0914 10:32:24.513719    4633 logs.go:123] Gathering logs for kube-scheduler [1102ad44a942] ...
	I0914 10:32:24.513731    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1102ad44a942"
	I0914 10:32:24.528494    4633 logs.go:123] Gathering logs for coredns [4bea8c7649df] ...
	I0914 10:32:24.528503    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bea8c7649df"
	I0914 10:32:24.540036    4633 logs.go:123] Gathering logs for kube-scheduler [2320ee4845a9] ...
	I0914 10:32:24.540046    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2320ee4845a9"
	I0914 10:32:24.555272    4633 logs.go:123] Gathering logs for Docker ...
	I0914 10:32:24.555284    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:32:24.582052    4633 logs.go:123] Gathering logs for container status ...
	I0914 10:32:24.582068    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:32:24.596258    4633 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:32:24.596268    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:32:24.629916    4633 logs.go:123] Gathering logs for kube-apiserver [cc9f670924c6] ...
	I0914 10:32:24.629929    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc9f670924c6"
	I0914 10:32:24.645037    4633 logs.go:123] Gathering logs for etcd [8076dfae9f44] ...
	I0914 10:32:24.645050    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8076dfae9f44"
	I0914 10:32:24.665151    4633 logs.go:123] Gathering logs for kube-controller-manager [da12d5ff26bb] ...
	I0914 10:32:24.665166    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da12d5ff26bb"
	I0914 10:32:24.686832    4633 logs.go:123] Gathering logs for dmesg ...
	I0914 10:32:24.686841    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:32:24.691712    4633 logs.go:123] Gathering logs for etcd [07d594fbccfe] ...
	I0914 10:32:24.691720    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07d594fbccfe"
	I0914 10:32:24.705757    4633 logs.go:123] Gathering logs for kube-proxy [e4616e40153f] ...
	I0914 10:32:24.705766    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4616e40153f"
	I0914 10:32:27.220026    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:32:32.221991    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:32:32.222122    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:32:32.234253    4633 logs.go:276] 2 containers: [cc9f670924c6 ae03d68bb317]
	I0914 10:32:32.234343    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:32:32.246134    4633 logs.go:276] 2 containers: [07d594fbccfe 8076dfae9f44]
	I0914 10:32:32.246233    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:32:32.258344    4633 logs.go:276] 1 containers: [4bea8c7649df]
	I0914 10:32:32.258439    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:32:32.273766    4633 logs.go:276] 2 containers: [1102ad44a942 2320ee4845a9]
	I0914 10:32:32.273866    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:32:32.287242    4633 logs.go:276] 1 containers: [e4616e40153f]
	I0914 10:32:32.287354    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:32:32.302917    4633 logs.go:276] 2 containers: [da12d5ff26bb 24ffba65710c]
	I0914 10:32:32.303007    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:32:32.314671    4633 logs.go:276] 0 containers: []
	W0914 10:32:32.314685    4633 logs.go:278] No container was found matching "kindnet"
	I0914 10:32:32.314762    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:32:32.327304    4633 logs.go:276] 2 containers: [80bfcc2ce813 4a27e6945f3c]
	I0914 10:32:32.327323    4633 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:32:32.327329    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:32:32.366692    4633 logs.go:123] Gathering logs for etcd [07d594fbccfe] ...
	I0914 10:32:32.366715    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07d594fbccfe"
	I0914 10:32:32.384112    4633 logs.go:123] Gathering logs for etcd [8076dfae9f44] ...
	I0914 10:32:32.384125    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8076dfae9f44"
	I0914 10:32:32.403270    4633 logs.go:123] Gathering logs for kube-controller-manager [da12d5ff26bb] ...
	I0914 10:32:32.403289    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da12d5ff26bb"
	I0914 10:32:32.425928    4633 logs.go:123] Gathering logs for storage-provisioner [80bfcc2ce813] ...
	I0914 10:32:32.425950    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80bfcc2ce813"
	I0914 10:32:32.441322    4633 logs.go:123] Gathering logs for Docker ...
	I0914 10:32:32.441332    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:32:32.470041    4633 logs.go:123] Gathering logs for kubelet ...
	I0914 10:32:32.470061    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:32:32.515808    4633 logs.go:123] Gathering logs for kube-scheduler [2320ee4845a9] ...
	I0914 10:32:32.515827    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2320ee4845a9"
	I0914 10:32:32.533169    4633 logs.go:123] Gathering logs for kube-controller-manager [24ffba65710c] ...
	I0914 10:32:32.533186    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24ffba65710c"
	I0914 10:32:32.547607    4633 logs.go:123] Gathering logs for storage-provisioner [4a27e6945f3c] ...
	I0914 10:32:32.547618    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a27e6945f3c"
	I0914 10:32:32.561197    4633 logs.go:123] Gathering logs for kube-apiserver [cc9f670924c6] ...
	I0914 10:32:32.561208    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc9f670924c6"
	I0914 10:32:32.576765    4633 logs.go:123] Gathering logs for container status ...
	I0914 10:32:32.576778    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:32:32.590836    4633 logs.go:123] Gathering logs for kube-apiserver [ae03d68bb317] ...
	I0914 10:32:32.590852    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae03d68bb317"
	I0914 10:32:32.615529    4633 logs.go:123] Gathering logs for coredns [4bea8c7649df] ...
	I0914 10:32:32.615544    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bea8c7649df"
	I0914 10:32:32.642021    4633 logs.go:123] Gathering logs for kube-scheduler [1102ad44a942] ...
	I0914 10:32:32.642035    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1102ad44a942"
	I0914 10:32:32.655864    4633 logs.go:123] Gathering logs for kube-proxy [e4616e40153f] ...
	I0914 10:32:32.655880    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4616e40153f"
	I0914 10:32:32.670638    4633 logs.go:123] Gathering logs for dmesg ...
	I0914 10:32:32.670649    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:32:35.177601    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:32:40.179660    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:32:40.180100    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:32:40.213213    4633 logs.go:276] 2 containers: [cc9f670924c6 ae03d68bb317]
	I0914 10:32:40.213374    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:32:40.237141    4633 logs.go:276] 2 containers: [07d594fbccfe 8076dfae9f44]
	I0914 10:32:40.237252    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:32:40.250610    4633 logs.go:276] 1 containers: [4bea8c7649df]
	I0914 10:32:40.250710    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:32:40.262328    4633 logs.go:276] 2 containers: [1102ad44a942 2320ee4845a9]
	I0914 10:32:40.262419    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:32:40.273442    4633 logs.go:276] 1 containers: [e4616e40153f]
	I0914 10:32:40.273527    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:32:40.288484    4633 logs.go:276] 2 containers: [da12d5ff26bb 24ffba65710c]
	I0914 10:32:40.288575    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:32:40.298913    4633 logs.go:276] 0 containers: []
	W0914 10:32:40.298927    4633 logs.go:278] No container was found matching "kindnet"
	I0914 10:32:40.299005    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:32:40.309334    4633 logs.go:276] 2 containers: [80bfcc2ce813 4a27e6945f3c]
	I0914 10:32:40.309354    4633 logs.go:123] Gathering logs for kube-proxy [e4616e40153f] ...
	I0914 10:32:40.309359    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4616e40153f"
	I0914 10:32:40.320943    4633 logs.go:123] Gathering logs for kube-controller-manager [24ffba65710c] ...
	I0914 10:32:40.320954    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24ffba65710c"
	I0914 10:32:40.338937    4633 logs.go:123] Gathering logs for dmesg ...
	I0914 10:32:40.338948    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:32:40.343806    4633 logs.go:123] Gathering logs for kube-apiserver [ae03d68bb317] ...
	I0914 10:32:40.343815    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae03d68bb317"
	I0914 10:32:40.364961    4633 logs.go:123] Gathering logs for etcd [07d594fbccfe] ...
	I0914 10:32:40.364973    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07d594fbccfe"
	I0914 10:32:40.379430    4633 logs.go:123] Gathering logs for kube-controller-manager [da12d5ff26bb] ...
	I0914 10:32:40.379442    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da12d5ff26bb"
	I0914 10:32:40.397148    4633 logs.go:123] Gathering logs for kube-apiserver [cc9f670924c6] ...
	I0914 10:32:40.397162    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc9f670924c6"
	I0914 10:32:40.411558    4633 logs.go:123] Gathering logs for etcd [8076dfae9f44] ...
	I0914 10:32:40.411570    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8076dfae9f44"
	I0914 10:32:40.429191    4633 logs.go:123] Gathering logs for kube-scheduler [1102ad44a942] ...
	I0914 10:32:40.429201    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1102ad44a942"
	I0914 10:32:40.441633    4633 logs.go:123] Gathering logs for kube-scheduler [2320ee4845a9] ...
	I0914 10:32:40.441644    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2320ee4845a9"
	I0914 10:32:40.456956    4633 logs.go:123] Gathering logs for storage-provisioner [80bfcc2ce813] ...
	I0914 10:32:40.456969    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80bfcc2ce813"
	I0914 10:32:40.468292    4633 logs.go:123] Gathering logs for storage-provisioner [4a27e6945f3c] ...
	I0914 10:32:40.468307    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a27e6945f3c"
	I0914 10:32:40.479149    4633 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:32:40.479161    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:32:40.514151    4633 logs.go:123] Gathering logs for coredns [4bea8c7649df] ...
	I0914 10:32:40.514160    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bea8c7649df"
	I0914 10:32:40.526363    4633 logs.go:123] Gathering logs for Docker ...
	I0914 10:32:40.526373    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:32:40.551585    4633 logs.go:123] Gathering logs for container status ...
	I0914 10:32:40.551593    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:32:40.563402    4633 logs.go:123] Gathering logs for kubelet ...
	I0914 10:32:40.563412    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:32:43.107016    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:32:48.109064    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:32:48.109170    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:32:48.120015    4633 logs.go:276] 2 containers: [cc9f670924c6 ae03d68bb317]
	I0914 10:32:48.120104    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:32:48.131526    4633 logs.go:276] 2 containers: [07d594fbccfe 8076dfae9f44]
	I0914 10:32:48.131607    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:32:48.142356    4633 logs.go:276] 1 containers: [4bea8c7649df]
	I0914 10:32:48.142446    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:32:48.153093    4633 logs.go:276] 2 containers: [1102ad44a942 2320ee4845a9]
	I0914 10:32:48.153181    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:32:48.163996    4633 logs.go:276] 1 containers: [e4616e40153f]
	I0914 10:32:48.164083    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:32:48.174897    4633 logs.go:276] 2 containers: [da12d5ff26bb 24ffba65710c]
	I0914 10:32:48.174984    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:32:48.185451    4633 logs.go:276] 0 containers: []
	W0914 10:32:48.185463    4633 logs.go:278] No container was found matching "kindnet"
	I0914 10:32:48.185537    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:32:48.196031    4633 logs.go:276] 2 containers: [80bfcc2ce813 4a27e6945f3c]
	I0914 10:32:48.196051    4633 logs.go:123] Gathering logs for kube-scheduler [1102ad44a942] ...
	I0914 10:32:48.196058    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1102ad44a942"
	I0914 10:32:48.208795    4633 logs.go:123] Gathering logs for kube-controller-manager [24ffba65710c] ...
	I0914 10:32:48.208809    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24ffba65710c"
	I0914 10:32:48.222929    4633 logs.go:123] Gathering logs for storage-provisioner [80bfcc2ce813] ...
	I0914 10:32:48.222941    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80bfcc2ce813"
	I0914 10:32:48.236946    4633 logs.go:123] Gathering logs for container status ...
	I0914 10:32:48.236958    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:32:48.249796    4633 logs.go:123] Gathering logs for kubelet ...
	I0914 10:32:48.249810    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:32:48.295496    4633 logs.go:123] Gathering logs for dmesg ...
	I0914 10:32:48.295514    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:32:48.301878    4633 logs.go:123] Gathering logs for coredns [4bea8c7649df] ...
	I0914 10:32:48.301892    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bea8c7649df"
	I0914 10:32:48.314863    4633 logs.go:123] Gathering logs for kube-apiserver [cc9f670924c6] ...
	I0914 10:32:48.314877    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc9f670924c6"
	I0914 10:32:48.330452    4633 logs.go:123] Gathering logs for etcd [07d594fbccfe] ...
	I0914 10:32:48.330463    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07d594fbccfe"
	I0914 10:32:48.348792    4633 logs.go:123] Gathering logs for kube-controller-manager [da12d5ff26bb] ...
	I0914 10:32:48.348810    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da12d5ff26bb"
	I0914 10:32:48.367178    4633 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:32:48.367189    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:32:48.405594    4633 logs.go:123] Gathering logs for kube-apiserver [ae03d68bb317] ...
	I0914 10:32:48.405612    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae03d68bb317"
	I0914 10:32:48.427447    4633 logs.go:123] Gathering logs for etcd [8076dfae9f44] ...
	I0914 10:32:48.427464    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8076dfae9f44"
	I0914 10:32:48.454509    4633 logs.go:123] Gathering logs for Docker ...
	I0914 10:32:48.454530    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:32:48.480916    4633 logs.go:123] Gathering logs for kube-scheduler [2320ee4845a9] ...
	I0914 10:32:48.480930    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2320ee4845a9"
	I0914 10:32:48.504794    4633 logs.go:123] Gathering logs for kube-proxy [e4616e40153f] ...
	I0914 10:32:48.504806    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4616e40153f"
	I0914 10:32:48.516993    4633 logs.go:123] Gathering logs for storage-provisioner [4a27e6945f3c] ...
	I0914 10:32:48.517004    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a27e6945f3c"
	I0914 10:32:51.033302    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:32:56.033727    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:32:56.033910    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:32:56.046310    4633 logs.go:276] 2 containers: [cc9f670924c6 ae03d68bb317]
	I0914 10:32:56.046399    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:32:56.060999    4633 logs.go:276] 2 containers: [07d594fbccfe 8076dfae9f44]
	I0914 10:32:56.061092    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:32:56.071809    4633 logs.go:276] 1 containers: [4bea8c7649df]
	I0914 10:32:56.071910    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:32:56.084713    4633 logs.go:276] 2 containers: [1102ad44a942 2320ee4845a9]
	I0914 10:32:56.084798    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:32:56.097808    4633 logs.go:276] 1 containers: [e4616e40153f]
	I0914 10:32:56.097879    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:32:56.108762    4633 logs.go:276] 2 containers: [da12d5ff26bb 24ffba65710c]
	I0914 10:32:56.108849    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:32:56.118754    4633 logs.go:276] 0 containers: []
	W0914 10:32:56.118767    4633 logs.go:278] No container was found matching "kindnet"
	I0914 10:32:56.118836    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:32:56.129390    4633 logs.go:276] 2 containers: [80bfcc2ce813 4a27e6945f3c]
	I0914 10:32:56.129408    4633 logs.go:123] Gathering logs for kube-scheduler [1102ad44a942] ...
	I0914 10:32:56.129413    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1102ad44a942"
	I0914 10:32:56.140977    4633 logs.go:123] Gathering logs for kube-controller-manager [24ffba65710c] ...
	I0914 10:32:56.140986    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24ffba65710c"
	I0914 10:32:56.156440    4633 logs.go:123] Gathering logs for kube-apiserver [ae03d68bb317] ...
	I0914 10:32:56.156453    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae03d68bb317"
	I0914 10:32:56.176329    4633 logs.go:123] Gathering logs for coredns [4bea8c7649df] ...
	I0914 10:32:56.176337    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bea8c7649df"
	I0914 10:32:56.187615    4633 logs.go:123] Gathering logs for etcd [8076dfae9f44] ...
	I0914 10:32:56.187625    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8076dfae9f44"
	I0914 10:32:56.205023    4633 logs.go:123] Gathering logs for kube-controller-manager [da12d5ff26bb] ...
	I0914 10:32:56.205033    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da12d5ff26bb"
	I0914 10:32:56.223037    4633 logs.go:123] Gathering logs for storage-provisioner [4a27e6945f3c] ...
	I0914 10:32:56.223046    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a27e6945f3c"
	I0914 10:32:56.234162    4633 logs.go:123] Gathering logs for container status ...
	I0914 10:32:56.234174    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:32:56.246513    4633 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:32:56.246526    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:32:56.281376    4633 logs.go:123] Gathering logs for kube-apiserver [cc9f670924c6] ...
	I0914 10:32:56.281388    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc9f670924c6"
	I0914 10:32:56.295479    4633 logs.go:123] Gathering logs for kube-scheduler [2320ee4845a9] ...
	I0914 10:32:56.295492    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2320ee4845a9"
	I0914 10:32:56.310379    4633 logs.go:123] Gathering logs for storage-provisioner [80bfcc2ce813] ...
	I0914 10:32:56.310390    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80bfcc2ce813"
	I0914 10:32:56.321734    4633 logs.go:123] Gathering logs for etcd [07d594fbccfe] ...
	I0914 10:32:56.321744    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07d594fbccfe"
	I0914 10:32:56.336259    4633 logs.go:123] Gathering logs for kube-proxy [e4616e40153f] ...
	I0914 10:32:56.336268    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4616e40153f"
	I0914 10:32:56.347460    4633 logs.go:123] Gathering logs for Docker ...
	I0914 10:32:56.347469    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:32:56.371683    4633 logs.go:123] Gathering logs for kubelet ...
	I0914 10:32:56.371697    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:32:56.410822    4633 logs.go:123] Gathering logs for dmesg ...
	I0914 10:32:56.410830    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
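
Each gathering round shells out per component: it first lists container IDs with a k8s_<component> name filter, then tails each matching container's log. A hedged Go sketch of that fan-out using os/exec; the docker commands and the 400-line tail mirror the log lines above, while the helper names are hypothetical:

// gather_logs.go - sketch of the per-component log gathering seen in this report.
// Helper names are illustrative; only the docker invocations come from the log.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists container IDs whose name matches the k8s_<component> filter,
// as in: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
	for _, comp := range components {
		ids, err := containerIDs(comp)
		if err != nil || len(ids) == 0 {
			// Matches the warning logged for the absent kindnet container.
			fmt.Printf("No container was found matching %q\n", comp)
			continue
		}
		for _, id := range ids {
			// Tail the last 400 lines, as in: docker logs --tail 400 <id>
			out, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("== %s [%s] ==\n%s", comp, id, out)
		}
	}
}
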
	I0914 10:32:58.917158    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:33:03.919371    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:33:03.919630    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:33:03.957766    4633 logs.go:276] 2 containers: [cc9f670924c6 ae03d68bb317]
	I0914 10:33:03.957865    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:33:03.976670    4633 logs.go:276] 2 containers: [07d594fbccfe 8076dfae9f44]
	I0914 10:33:03.976748    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:33:03.988240    4633 logs.go:276] 1 containers: [4bea8c7649df]
	I0914 10:33:03.988327    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:33:03.998877    4633 logs.go:276] 2 containers: [1102ad44a942 2320ee4845a9]
	I0914 10:33:03.998959    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:33:04.009613    4633 logs.go:276] 1 containers: [e4616e40153f]
	I0914 10:33:04.009686    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:33:04.020211    4633 logs.go:276] 2 containers: [da12d5ff26bb 24ffba65710c]
	I0914 10:33:04.020279    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:33:04.029947    4633 logs.go:276] 0 containers: []
	W0914 10:33:04.029959    4633 logs.go:278] No container was found matching "kindnet"
	I0914 10:33:04.030031    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:33:04.048420    4633 logs.go:276] 2 containers: [80bfcc2ce813 4a27e6945f3c]
	I0914 10:33:04.048441    4633 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:33:04.048447    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:33:04.084841    4633 logs.go:123] Gathering logs for etcd [07d594fbccfe] ...
	I0914 10:33:04.084856    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07d594fbccfe"
	I0914 10:33:04.098830    4633 logs.go:123] Gathering logs for kube-scheduler [2320ee4845a9] ...
	I0914 10:33:04.098840    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2320ee4845a9"
	I0914 10:33:04.116827    4633 logs.go:123] Gathering logs for kube-proxy [e4616e40153f] ...
	I0914 10:33:04.116839    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4616e40153f"
	I0914 10:33:04.129060    4633 logs.go:123] Gathering logs for kube-controller-manager [da12d5ff26bb] ...
	I0914 10:33:04.129072    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da12d5ff26bb"
	I0914 10:33:04.146272    4633 logs.go:123] Gathering logs for kube-controller-manager [24ffba65710c] ...
	I0914 10:33:04.146282    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24ffba65710c"
	I0914 10:33:04.158023    4633 logs.go:123] Gathering logs for kubelet ...
	I0914 10:33:04.158037    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:33:04.197491    4633 logs.go:123] Gathering logs for dmesg ...
	I0914 10:33:04.197501    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:33:04.202377    4633 logs.go:123] Gathering logs for container status ...
	I0914 10:33:04.202385    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:33:04.213898    4633 logs.go:123] Gathering logs for coredns [4bea8c7649df] ...
	I0914 10:33:04.213911    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bea8c7649df"
	I0914 10:33:04.229149    4633 logs.go:123] Gathering logs for storage-provisioner [4a27e6945f3c] ...
	I0914 10:33:04.229158    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a27e6945f3c"
	I0914 10:33:04.240177    4633 logs.go:123] Gathering logs for kube-apiserver [cc9f670924c6] ...
	I0914 10:33:04.240187    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc9f670924c6"
	I0914 10:33:04.260718    4633 logs.go:123] Gathering logs for etcd [8076dfae9f44] ...
	I0914 10:33:04.260727    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8076dfae9f44"
	I0914 10:33:04.278069    4633 logs.go:123] Gathering logs for Docker ...
	I0914 10:33:04.278080    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:33:04.301182    4633 logs.go:123] Gathering logs for kube-apiserver [ae03d68bb317] ...
	I0914 10:33:04.301188    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae03d68bb317"
	I0914 10:33:04.324683    4633 logs.go:123] Gathering logs for storage-provisioner [80bfcc2ce813] ...
	I0914 10:33:04.324696    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80bfcc2ce813"
	I0914 10:33:04.336007    4633 logs.go:123] Gathering logs for kube-scheduler [1102ad44a942] ...
	I0914 10:33:04.336016    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1102ad44a942"
	I0914 10:33:06.849626    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:33:11.851767    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:33:11.851941    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:33:11.869583    4633 logs.go:276] 2 containers: [cc9f670924c6 ae03d68bb317]
	I0914 10:33:11.869669    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:33:11.882634    4633 logs.go:276] 2 containers: [07d594fbccfe 8076dfae9f44]
	I0914 10:33:11.882728    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:33:11.894920    4633 logs.go:276] 1 containers: [4bea8c7649df]
	I0914 10:33:11.895019    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:33:11.907331    4633 logs.go:276] 2 containers: [1102ad44a942 2320ee4845a9]
	I0914 10:33:11.907438    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:33:11.919937    4633 logs.go:276] 1 containers: [e4616e40153f]
	I0914 10:33:11.920039    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:33:11.932805    4633 logs.go:276] 2 containers: [da12d5ff26bb 24ffba65710c]
	I0914 10:33:11.932900    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:33:11.945935    4633 logs.go:276] 0 containers: []
	W0914 10:33:11.945950    4633 logs.go:278] No container was found matching "kindnet"
	I0914 10:33:11.946046    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:33:11.958308    4633 logs.go:276] 2 containers: [80bfcc2ce813 4a27e6945f3c]
	I0914 10:33:11.958328    4633 logs.go:123] Gathering logs for kube-controller-manager [24ffba65710c] ...
	I0914 10:33:11.958335    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24ffba65710c"
	I0914 10:33:11.980575    4633 logs.go:123] Gathering logs for kubelet ...
	I0914 10:33:11.980589    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:33:12.024698    4633 logs.go:123] Gathering logs for dmesg ...
	I0914 10:33:12.024719    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:33:12.029824    4633 logs.go:123] Gathering logs for coredns [4bea8c7649df] ...
	I0914 10:33:12.029836    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bea8c7649df"
	I0914 10:33:12.043580    4633 logs.go:123] Gathering logs for kube-scheduler [1102ad44a942] ...
	I0914 10:33:12.043591    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1102ad44a942"
	I0914 10:33:12.057226    4633 logs.go:123] Gathering logs for storage-provisioner [4a27e6945f3c] ...
	I0914 10:33:12.057236    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a27e6945f3c"
	I0914 10:33:12.075849    4633 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:33:12.075861    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:33:12.117745    4633 logs.go:123] Gathering logs for kube-apiserver [ae03d68bb317] ...
	I0914 10:33:12.117759    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae03d68bb317"
	I0914 10:33:12.143082    4633 logs.go:123] Gathering logs for etcd [07d594fbccfe] ...
	I0914 10:33:12.143103    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07d594fbccfe"
	I0914 10:33:12.159140    4633 logs.go:123] Gathering logs for kube-controller-manager [da12d5ff26bb] ...
	I0914 10:33:12.159154    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da12d5ff26bb"
	I0914 10:33:12.178544    4633 logs.go:123] Gathering logs for storage-provisioner [80bfcc2ce813] ...
	I0914 10:33:12.178567    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80bfcc2ce813"
	I0914 10:33:12.191869    4633 logs.go:123] Gathering logs for Docker ...
	I0914 10:33:12.191881    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:33:12.218636    4633 logs.go:123] Gathering logs for container status ...
	I0914 10:33:12.218654    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:33:12.232375    4633 logs.go:123] Gathering logs for kube-apiserver [cc9f670924c6] ...
	I0914 10:33:12.232393    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc9f670924c6"
	I0914 10:33:12.254531    4633 logs.go:123] Gathering logs for etcd [8076dfae9f44] ...
	I0914 10:33:12.254544    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8076dfae9f44"
	I0914 10:33:12.274745    4633 logs.go:123] Gathering logs for kube-scheduler [2320ee4845a9] ...
	I0914 10:33:12.274765    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2320ee4845a9"
	I0914 10:33:12.291698    4633 logs.go:123] Gathering logs for kube-proxy [e4616e40153f] ...
	I0914 10:33:12.291717    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4616e40153f"
	I0914 10:33:14.808187    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:33:19.810262    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:33:19.810646    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:33:19.838213    4633 logs.go:276] 2 containers: [cc9f670924c6 ae03d68bb317]
	I0914 10:33:19.838357    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:33:19.856051    4633 logs.go:276] 2 containers: [07d594fbccfe 8076dfae9f44]
	I0914 10:33:19.856159    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:33:19.869665    4633 logs.go:276] 1 containers: [4bea8c7649df]
	I0914 10:33:19.869759    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:33:19.882351    4633 logs.go:276] 2 containers: [1102ad44a942 2320ee4845a9]
	I0914 10:33:19.882438    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:33:19.892555    4633 logs.go:276] 1 containers: [e4616e40153f]
	I0914 10:33:19.892643    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:33:19.902900    4633 logs.go:276] 2 containers: [da12d5ff26bb 24ffba65710c]
	I0914 10:33:19.902982    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:33:19.916146    4633 logs.go:276] 0 containers: []
	W0914 10:33:19.916158    4633 logs.go:278] No container was found matching "kindnet"
	I0914 10:33:19.916228    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:33:19.926560    4633 logs.go:276] 2 containers: [80bfcc2ce813 4a27e6945f3c]
	I0914 10:33:19.926583    4633 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:33:19.926589    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:33:19.961073    4633 logs.go:123] Gathering logs for kube-apiserver [ae03d68bb317] ...
	I0914 10:33:19.961086    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae03d68bb317"
	I0914 10:33:19.981657    4633 logs.go:123] Gathering logs for coredns [4bea8c7649df] ...
	I0914 10:33:19.981669    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bea8c7649df"
	I0914 10:33:19.993327    4633 logs.go:123] Gathering logs for kube-controller-manager [da12d5ff26bb] ...
	I0914 10:33:19.993339    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da12d5ff26bb"
	I0914 10:33:20.010394    4633 logs.go:123] Gathering logs for kube-apiserver [cc9f670924c6] ...
	I0914 10:33:20.010404    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc9f670924c6"
	I0914 10:33:20.024334    4633 logs.go:123] Gathering logs for etcd [07d594fbccfe] ...
	I0914 10:33:20.024343    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07d594fbccfe"
	I0914 10:33:20.039330    4633 logs.go:123] Gathering logs for etcd [8076dfae9f44] ...
	I0914 10:33:20.039341    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8076dfae9f44"
	I0914 10:33:20.056868    4633 logs.go:123] Gathering logs for kube-controller-manager [24ffba65710c] ...
	I0914 10:33:20.056878    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24ffba65710c"
	I0914 10:33:20.071541    4633 logs.go:123] Gathering logs for storage-provisioner [4a27e6945f3c] ...
	I0914 10:33:20.071549    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a27e6945f3c"
	I0914 10:33:20.084298    4633 logs.go:123] Gathering logs for Docker ...
	I0914 10:33:20.084310    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:33:20.108355    4633 logs.go:123] Gathering logs for kubelet ...
	I0914 10:33:20.108369    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:33:20.150241    4633 logs.go:123] Gathering logs for dmesg ...
	I0914 10:33:20.150255    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:33:20.154780    4633 logs.go:123] Gathering logs for kube-scheduler [1102ad44a942] ...
	I0914 10:33:20.154788    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1102ad44a942"
	I0914 10:33:20.166968    4633 logs.go:123] Gathering logs for kube-proxy [e4616e40153f] ...
	I0914 10:33:20.166978    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4616e40153f"
	I0914 10:33:20.178827    4633 logs.go:123] Gathering logs for container status ...
	I0914 10:33:20.178840    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:33:20.192069    4633 logs.go:123] Gathering logs for kube-scheduler [2320ee4845a9] ...
	I0914 10:33:20.192079    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2320ee4845a9"
	I0914 10:33:20.207613    4633 logs.go:123] Gathering logs for storage-provisioner [80bfcc2ce813] ...
	I0914 10:33:20.207623    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80bfcc2ce813"
	I0914 10:33:22.725364    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:33:27.727384    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:33:27.727495    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:33:27.739966    4633 logs.go:276] 2 containers: [cc9f670924c6 ae03d68bb317]
	I0914 10:33:27.740051    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:33:27.750721    4633 logs.go:276] 2 containers: [07d594fbccfe 8076dfae9f44]
	I0914 10:33:27.750801    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:33:27.765982    4633 logs.go:276] 1 containers: [4bea8c7649df]
	I0914 10:33:27.766062    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:33:27.776988    4633 logs.go:276] 2 containers: [1102ad44a942 2320ee4845a9]
	I0914 10:33:27.777075    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:33:27.787568    4633 logs.go:276] 1 containers: [e4616e40153f]
	I0914 10:33:27.787652    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:33:27.798344    4633 logs.go:276] 2 containers: [da12d5ff26bb 24ffba65710c]
	I0914 10:33:27.798423    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:33:27.809209    4633 logs.go:276] 0 containers: []
	W0914 10:33:27.809227    4633 logs.go:278] No container was found matching "kindnet"
	I0914 10:33:27.809305    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:33:27.819732    4633 logs.go:276] 2 containers: [80bfcc2ce813 4a27e6945f3c]
	I0914 10:33:27.819752    4633 logs.go:123] Gathering logs for kube-controller-manager [24ffba65710c] ...
	I0914 10:33:27.819756    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24ffba65710c"
	I0914 10:33:27.831837    4633 logs.go:123] Gathering logs for Docker ...
	I0914 10:33:27.831848    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:33:27.855471    4633 logs.go:123] Gathering logs for kubelet ...
	I0914 10:33:27.855500    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:33:27.895774    4633 logs.go:123] Gathering logs for kube-scheduler [1102ad44a942] ...
	I0914 10:33:27.895781    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1102ad44a942"
	I0914 10:33:27.908096    4633 logs.go:123] Gathering logs for kube-proxy [e4616e40153f] ...
	I0914 10:33:27.908107    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4616e40153f"
	I0914 10:33:27.920185    4633 logs.go:123] Gathering logs for kube-controller-manager [da12d5ff26bb] ...
	I0914 10:33:27.920196    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da12d5ff26bb"
	I0914 10:33:27.937793    4633 logs.go:123] Gathering logs for etcd [8076dfae9f44] ...
	I0914 10:33:27.937804    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8076dfae9f44"
	I0914 10:33:27.955233    4633 logs.go:123] Gathering logs for coredns [4bea8c7649df] ...
	I0914 10:33:27.955244    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bea8c7649df"
	I0914 10:33:27.966238    4633 logs.go:123] Gathering logs for kube-scheduler [2320ee4845a9] ...
	I0914 10:33:27.966248    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2320ee4845a9"
	I0914 10:33:27.981769    4633 logs.go:123] Gathering logs for storage-provisioner [4a27e6945f3c] ...
	I0914 10:33:27.981780    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a27e6945f3c"
	I0914 10:33:27.992722    4633 logs.go:123] Gathering logs for container status ...
	I0914 10:33:27.992734    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:33:28.005018    4633 logs.go:123] Gathering logs for etcd [07d594fbccfe] ...
	I0914 10:33:28.005028    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07d594fbccfe"
	I0914 10:33:28.019019    4633 logs.go:123] Gathering logs for storage-provisioner [80bfcc2ce813] ...
	I0914 10:33:28.019032    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80bfcc2ce813"
	I0914 10:33:28.035225    4633 logs.go:123] Gathering logs for dmesg ...
	I0914 10:33:28.035235    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:33:28.039527    4633 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:33:28.039533    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:33:28.076142    4633 logs.go:123] Gathering logs for kube-apiserver [cc9f670924c6] ...
	I0914 10:33:28.076152    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc9f670924c6"
	I0914 10:33:28.097575    4633 logs.go:123] Gathering logs for kube-apiserver [ae03d68bb317] ...
	I0914 10:33:28.097586    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae03d68bb317"
	I0914 10:33:30.621028    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:33:35.623067    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:33:35.623179    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:33:35.634330    4633 logs.go:276] 2 containers: [cc9f670924c6 ae03d68bb317]
	I0914 10:33:35.634418    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:33:35.645167    4633 logs.go:276] 2 containers: [07d594fbccfe 8076dfae9f44]
	I0914 10:33:35.645252    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:33:35.656619    4633 logs.go:276] 1 containers: [4bea8c7649df]
	I0914 10:33:35.656701    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:33:35.667268    4633 logs.go:276] 2 containers: [1102ad44a942 2320ee4845a9]
	I0914 10:33:35.667359    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:33:35.678398    4633 logs.go:276] 1 containers: [e4616e40153f]
	I0914 10:33:35.678471    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:33:35.695335    4633 logs.go:276] 2 containers: [da12d5ff26bb 24ffba65710c]
	I0914 10:33:35.695415    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:33:35.705268    4633 logs.go:276] 0 containers: []
	W0914 10:33:35.705281    4633 logs.go:278] No container was found matching "kindnet"
	I0914 10:33:35.705349    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:33:35.715622    4633 logs.go:276] 2 containers: [80bfcc2ce813 4a27e6945f3c]
	I0914 10:33:35.715638    4633 logs.go:123] Gathering logs for etcd [07d594fbccfe] ...
	I0914 10:33:35.715644    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07d594fbccfe"
	I0914 10:33:35.729373    4633 logs.go:123] Gathering logs for etcd [8076dfae9f44] ...
	I0914 10:33:35.729383    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8076dfae9f44"
	I0914 10:33:35.746863    4633 logs.go:123] Gathering logs for dmesg ...
	I0914 10:33:35.746878    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:33:35.751172    4633 logs.go:123] Gathering logs for kube-scheduler [1102ad44a942] ...
	I0914 10:33:35.751179    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1102ad44a942"
	I0914 10:33:35.763201    4633 logs.go:123] Gathering logs for kube-controller-manager [da12d5ff26bb] ...
	I0914 10:33:35.763211    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da12d5ff26bb"
	I0914 10:33:35.781347    4633 logs.go:123] Gathering logs for Docker ...
	I0914 10:33:35.781368    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:33:35.804778    4633 logs.go:123] Gathering logs for container status ...
	I0914 10:33:35.804794    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:33:35.817493    4633 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:33:35.817505    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:33:35.852124    4633 logs.go:123] Gathering logs for kube-apiserver [ae03d68bb317] ...
	I0914 10:33:35.852137    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae03d68bb317"
	I0914 10:33:35.873987    4633 logs.go:123] Gathering logs for kube-scheduler [2320ee4845a9] ...
	I0914 10:33:35.874001    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2320ee4845a9"
	I0914 10:33:35.897550    4633 logs.go:123] Gathering logs for kube-controller-manager [24ffba65710c] ...
	I0914 10:33:35.897561    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24ffba65710c"
	I0914 10:33:35.909581    4633 logs.go:123] Gathering logs for storage-provisioner [80bfcc2ce813] ...
	I0914 10:33:35.909592    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80bfcc2ce813"
	I0914 10:33:35.921676    4633 logs.go:123] Gathering logs for storage-provisioner [4a27e6945f3c] ...
	I0914 10:33:35.921688    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a27e6945f3c"
	I0914 10:33:35.933209    4633 logs.go:123] Gathering logs for kubelet ...
	I0914 10:33:35.933220    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:33:35.972587    4633 logs.go:123] Gathering logs for kube-apiserver [cc9f670924c6] ...
	I0914 10:33:35.972596    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc9f670924c6"
	I0914 10:33:35.986235    4633 logs.go:123] Gathering logs for coredns [4bea8c7649df] ...
	I0914 10:33:35.986246    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bea8c7649df"
	I0914 10:33:35.998983    4633 logs.go:123] Gathering logs for kube-proxy [e4616e40153f] ...
	I0914 10:33:35.998995    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4616e40153f"
	I0914 10:33:38.512851    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:33:43.513108    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:33:43.513376    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:33:43.537936    4633 logs.go:276] 2 containers: [cc9f670924c6 ae03d68bb317]
	I0914 10:33:43.538067    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:33:43.554185    4633 logs.go:276] 2 containers: [07d594fbccfe 8076dfae9f44]
	I0914 10:33:43.554275    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:33:43.570665    4633 logs.go:276] 1 containers: [4bea8c7649df]
	I0914 10:33:43.570733    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:33:43.581322    4633 logs.go:276] 2 containers: [1102ad44a942 2320ee4845a9]
	I0914 10:33:43.581419    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:33:43.591590    4633 logs.go:276] 1 containers: [e4616e40153f]
	I0914 10:33:43.591679    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:33:43.606974    4633 logs.go:276] 2 containers: [da12d5ff26bb 24ffba65710c]
	I0914 10:33:43.607058    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:33:43.617032    4633 logs.go:276] 0 containers: []
	W0914 10:33:43.617044    4633 logs.go:278] No container was found matching "kindnet"
	I0914 10:33:43.617123    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:33:43.627939    4633 logs.go:276] 2 containers: [80bfcc2ce813 4a27e6945f3c]
	I0914 10:33:43.627957    4633 logs.go:123] Gathering logs for storage-provisioner [80bfcc2ce813] ...
	I0914 10:33:43.627962    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80bfcc2ce813"
	I0914 10:33:43.639695    4633 logs.go:123] Gathering logs for container status ...
	I0914 10:33:43.639709    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:33:43.652233    4633 logs.go:123] Gathering logs for dmesg ...
	I0914 10:33:43.652243    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:33:43.656541    4633 logs.go:123] Gathering logs for kube-apiserver [ae03d68bb317] ...
	I0914 10:33:43.656548    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae03d68bb317"
	I0914 10:33:43.676733    4633 logs.go:123] Gathering logs for etcd [07d594fbccfe] ...
	I0914 10:33:43.676746    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07d594fbccfe"
	I0914 10:33:43.690944    4633 logs.go:123] Gathering logs for kube-scheduler [1102ad44a942] ...
	I0914 10:33:43.690959    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1102ad44a942"
	I0914 10:33:43.702678    4633 logs.go:123] Gathering logs for kube-scheduler [2320ee4845a9] ...
	I0914 10:33:43.702691    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2320ee4845a9"
	I0914 10:33:43.717551    4633 logs.go:123] Gathering logs for kubelet ...
	I0914 10:33:43.717564    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:33:43.757761    4633 logs.go:123] Gathering logs for kube-controller-manager [24ffba65710c] ...
	I0914 10:33:43.757774    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24ffba65710c"
	I0914 10:33:43.769439    4633 logs.go:123] Gathering logs for storage-provisioner [4a27e6945f3c] ...
	I0914 10:33:43.769450    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a27e6945f3c"
	I0914 10:33:43.781196    4633 logs.go:123] Gathering logs for Docker ...
	I0914 10:33:43.781209    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:33:43.803684    4633 logs.go:123] Gathering logs for kube-proxy [e4616e40153f] ...
	I0914 10:33:43.803691    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4616e40153f"
	I0914 10:33:43.815725    4633 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:33:43.815734    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:33:43.851155    4633 logs.go:123] Gathering logs for kube-apiserver [cc9f670924c6] ...
	I0914 10:33:43.851164    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc9f670924c6"
	I0914 10:33:43.865963    4633 logs.go:123] Gathering logs for etcd [8076dfae9f44] ...
	I0914 10:33:43.865973    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8076dfae9f44"
	I0914 10:33:43.883862    4633 logs.go:123] Gathering logs for coredns [4bea8c7649df] ...
	I0914 10:33:43.883874    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bea8c7649df"
	I0914 10:33:43.899116    4633 logs.go:123] Gathering logs for kube-controller-manager [da12d5ff26bb] ...
	I0914 10:33:43.899128    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da12d5ff26bb"
	I0914 10:33:46.426023    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:33:51.428647    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:33:51.428897    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:33:51.447584    4633 logs.go:276] 2 containers: [cc9f670924c6 ae03d68bb317]
	I0914 10:33:51.447698    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:33:51.460898    4633 logs.go:276] 2 containers: [07d594fbccfe 8076dfae9f44]
	I0914 10:33:51.460980    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:33:51.475729    4633 logs.go:276] 1 containers: [4bea8c7649df]
	I0914 10:33:51.475803    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:33:51.486191    4633 logs.go:276] 2 containers: [1102ad44a942 2320ee4845a9]
	I0914 10:33:51.486270    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:33:51.497833    4633 logs.go:276] 1 containers: [e4616e40153f]
	I0914 10:33:51.497908    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:33:51.508277    4633 logs.go:276] 2 containers: [da12d5ff26bb 24ffba65710c]
	I0914 10:33:51.508364    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:33:51.518251    4633 logs.go:276] 0 containers: []
	W0914 10:33:51.518263    4633 logs.go:278] No container was found matching "kindnet"
	I0914 10:33:51.518337    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:33:51.528783    4633 logs.go:276] 2 containers: [80bfcc2ce813 4a27e6945f3c]
	I0914 10:33:51.528800    4633 logs.go:123] Gathering logs for storage-provisioner [4a27e6945f3c] ...
	I0914 10:33:51.528805    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a27e6945f3c"
	I0914 10:33:51.543642    4633 logs.go:123] Gathering logs for dmesg ...
	I0914 10:33:51.543654    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:33:51.548402    4633 logs.go:123] Gathering logs for kube-apiserver [ae03d68bb317] ...
	I0914 10:33:51.548410    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae03d68bb317"
	I0914 10:33:51.571576    4633 logs.go:123] Gathering logs for etcd [8076dfae9f44] ...
	I0914 10:33:51.571595    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8076dfae9f44"
	I0914 10:33:51.589262    4633 logs.go:123] Gathering logs for kube-controller-manager [24ffba65710c] ...
	I0914 10:33:51.589277    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24ffba65710c"
	I0914 10:33:51.601043    4633 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:33:51.601053    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:33:51.635911    4633 logs.go:123] Gathering logs for coredns [4bea8c7649df] ...
	I0914 10:33:51.635922    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bea8c7649df"
	I0914 10:33:51.647872    4633 logs.go:123] Gathering logs for kube-controller-manager [da12d5ff26bb] ...
	I0914 10:33:51.647883    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da12d5ff26bb"
	I0914 10:33:51.665096    4633 logs.go:123] Gathering logs for container status ...
	I0914 10:33:51.665107    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:33:51.676791    4633 logs.go:123] Gathering logs for kube-apiserver [cc9f670924c6] ...
	I0914 10:33:51.676803    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc9f670924c6"
	I0914 10:33:51.691192    4633 logs.go:123] Gathering logs for etcd [07d594fbccfe] ...
	I0914 10:33:51.691202    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07d594fbccfe"
	I0914 10:33:51.704912    4633 logs.go:123] Gathering logs for kube-scheduler [1102ad44a942] ...
	I0914 10:33:51.704928    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1102ad44a942"
	I0914 10:33:51.723840    4633 logs.go:123] Gathering logs for Docker ...
	I0914 10:33:51.723854    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:33:51.747634    4633 logs.go:123] Gathering logs for kubelet ...
	I0914 10:33:51.747642    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:33:51.789545    4633 logs.go:123] Gathering logs for kube-scheduler [2320ee4845a9] ...
	I0914 10:33:51.789556    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2320ee4845a9"
	I0914 10:33:51.804400    4633 logs.go:123] Gathering logs for kube-proxy [e4616e40153f] ...
	I0914 10:33:51.804410    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4616e40153f"
	I0914 10:33:51.816878    4633 logs.go:123] Gathering logs for storage-provisioner [80bfcc2ce813] ...
	I0914 10:33:51.816889    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80bfcc2ce813"
	I0914 10:33:54.335877    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:33:59.337976    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
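(Editorial note: the repeated healthz probes above can be approximated from a shell when debugging a run like this. A minimal sketch, assuming curl is available in the guest, accepting the apiserver's self-signed certificate with -k, and using the same 5-second per-attempt timeout seen in the log; the endpoint is the one this run probes.)

    # Sketch: poll the apiserver healthz endpoint the way api_server.go
    # does, with a 5-second client timeout per attempt.
    until curl -sk --max-time 5 https://10.0.2.15:8443/healthz; do
      echo "apiserver not healthy yet; retrying..."
      sleep 2
    done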
	I0914 10:33:59.338102    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:33:59.350109    4633 logs.go:276] 2 containers: [cc9f670924c6 ae03d68bb317]
	I0914 10:33:59.350205    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:33:59.361032    4633 logs.go:276] 2 containers: [07d594fbccfe 8076dfae9f44]
	I0914 10:33:59.361117    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:33:59.371498    4633 logs.go:276] 1 containers: [4bea8c7649df]
	I0914 10:33:59.371577    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:33:59.381962    4633 logs.go:276] 2 containers: [1102ad44a942 2320ee4845a9]
	I0914 10:33:59.382055    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:33:59.392768    4633 logs.go:276] 1 containers: [e4616e40153f]
	I0914 10:33:59.392849    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:33:59.404287    4633 logs.go:276] 2 containers: [da12d5ff26bb 24ffba65710c]
	I0914 10:33:59.404371    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:33:59.415243    4633 logs.go:276] 0 containers: []
	W0914 10:33:59.415256    4633 logs.go:278] No container was found matching "kindnet"
	I0914 10:33:59.415331    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:33:59.425631    4633 logs.go:276] 2 containers: [80bfcc2ce813 4a27e6945f3c]
	I0914 10:33:59.425648    4633 logs.go:123] Gathering logs for kubelet ...
	I0914 10:33:59.425655    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:33:59.467519    4633 logs.go:123] Gathering logs for coredns [4bea8c7649df] ...
	I0914 10:33:59.467530    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bea8c7649df"
	I0914 10:33:59.478525    4633 logs.go:123] Gathering logs for kube-controller-manager [da12d5ff26bb] ...
	I0914 10:33:59.478535    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da12d5ff26bb"
	I0914 10:33:59.500936    4633 logs.go:123] Gathering logs for storage-provisioner [4a27e6945f3c] ...
	I0914 10:33:59.500948    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a27e6945f3c"
	I0914 10:33:59.514792    4633 logs.go:123] Gathering logs for container status ...
	I0914 10:33:59.514803    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:33:59.526426    4633 logs.go:123] Gathering logs for kube-apiserver [ae03d68bb317] ...
	I0914 10:33:59.526436    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae03d68bb317"
	I0914 10:33:59.552832    4633 logs.go:123] Gathering logs for kube-proxy [e4616e40153f] ...
	I0914 10:33:59.552847    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4616e40153f"
	I0914 10:33:59.565796    4633 logs.go:123] Gathering logs for kube-controller-manager [24ffba65710c] ...
	I0914 10:33:59.565806    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24ffba65710c"
	I0914 10:33:59.576904    4633 logs.go:123] Gathering logs for kube-scheduler [1102ad44a942] ...
	I0914 10:33:59.576915    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1102ad44a942"
	I0914 10:33:59.588626    4633 logs.go:123] Gathering logs for kube-scheduler [2320ee4845a9] ...
	I0914 10:33:59.588639    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2320ee4845a9"
	I0914 10:33:59.603597    4633 logs.go:123] Gathering logs for storage-provisioner [80bfcc2ce813] ...
	I0914 10:33:59.603608    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80bfcc2ce813"
	I0914 10:33:59.615267    4633 logs.go:123] Gathering logs for Docker ...
	I0914 10:33:59.615277    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:33:59.637775    4633 logs.go:123] Gathering logs for dmesg ...
	I0914 10:33:59.637783    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:33:59.642031    4633 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:33:59.642038    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:33:59.678279    4633 logs.go:123] Gathering logs for kube-apiserver [cc9f670924c6] ...
	I0914 10:33:59.678288    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc9f670924c6"
	I0914 10:33:59.696465    4633 logs.go:123] Gathering logs for etcd [07d594fbccfe] ...
	I0914 10:33:59.696476    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07d594fbccfe"
	I0914 10:33:59.710871    4633 logs.go:123] Gathering logs for etcd [8076dfae9f44] ...
	I0914 10:33:59.710885    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8076dfae9f44"
	I0914 10:34:02.230453    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:34:07.231868    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:34:07.231940    4633 kubeadm.go:597] duration metric: took 4m4.379932542s to restartPrimaryControlPlane
	W0914 10:34:07.232000    4633 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0914 10:34:07.232028    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0914 10:34:08.201720    4633 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 10:34:08.206741    4633 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 10:34:08.209508    4633 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 10:34:08.212524    4633 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 10:34:08.212530    4633 kubeadm.go:157] found existing configuration files:
	
	I0914 10:34:08.212562    4633 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50278 /etc/kubernetes/admin.conf
	I0914 10:34:08.215139    4633 kubeadm.go:163] "https://control-plane.minikube.internal:50278" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50278 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 10:34:08.215170    4633 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 10:34:08.217549    4633 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50278 /etc/kubernetes/kubelet.conf
	I0914 10:34:08.220576    4633 kubeadm.go:163] "https://control-plane.minikube.internal:50278" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50278 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 10:34:08.220603    4633 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 10:34:08.223632    4633 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50278 /etc/kubernetes/controller-manager.conf
	I0914 10:34:08.226075    4633 kubeadm.go:163] "https://control-plane.minikube.internal:50278" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50278 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 10:34:08.226101    4633 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 10:34:08.228838    4633 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50278 /etc/kubernetes/scheduler.conf
	I0914 10:34:08.231778    4633 kubeadm.go:163] "https://control-plane.minikube.internal:50278" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50278 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 10:34:08.231806    4633 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
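(Editorial note: the stale-config cleanup above follows one pattern per file: grep the kubeconfig for the expected control-plane endpoint and remove the file when the endpoint is absent — or, as in this run, when the file does not exist at all. A minimal sketch of the same logic, using the endpoint and file names from the log.)

    # Sketch of the stale kubeconfig cleanup (kubeadm.go:163):
    # drop any config that does not reference the expected endpoint.
    endpoint="https://control-plane.minikube.internal:50278"
    for f in admin kubelet controller-manager scheduler; do
      conf="/etc/kubernetes/${f}.conf"
      sudo grep -q "$endpoint" "$conf" || sudo rm -f "$conf"
    done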
	I0914 10:34:08.234289    4633 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 10:34:08.251846    4633 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0914 10:34:08.251875    4633 kubeadm.go:310] [preflight] Running pre-flight checks
	I0914 10:34:08.310334    4633 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 10:34:08.310425    4633 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 10:34:08.310484    4633 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0914 10:34:08.362276    4633 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 10:34:08.366543    4633 out.go:235]   - Generating certificates and keys ...
	I0914 10:34:08.366580    4633 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0914 10:34:08.366610    4633 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0914 10:34:08.366649    4633 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0914 10:34:08.366679    4633 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0914 10:34:08.366744    4633 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0914 10:34:08.366775    4633 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0914 10:34:08.366827    4633 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0914 10:34:08.366871    4633 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0914 10:34:08.366916    4633 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0914 10:34:08.366952    4633 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0914 10:34:08.366972    4633 kubeadm.go:310] [certs] Using the existing "sa" key
	I0914 10:34:08.367001    4633 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 10:34:08.619131    4633 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 10:34:08.795955    4633 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 10:34:09.003335    4633 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 10:34:09.077024    4633 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 10:34:09.104852    4633 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 10:34:09.105159    4633 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 10:34:09.105184    4633 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0914 10:34:09.209920    4633 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 10:34:09.214167    4633 out.go:235]   - Booting up control plane ...
	I0914 10:34:09.214218    4633 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 10:34:09.214254    4633 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 10:34:09.214288    4633 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 10:34:09.214376    4633 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 10:34:09.214486    4633 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0914 10:34:13.213611    4633 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.001848 seconds
	I0914 10:34:13.213675    4633 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0914 10:34:13.217249    4633 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0914 10:34:13.735323    4633 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0914 10:34:13.735598    4633 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-158000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0914 10:34:14.239568    4633 kubeadm.go:310] [bootstrap-token] Using token: tndwzs.bc88b49vrocmhecw
	I0914 10:34:14.245910    4633 out.go:235]   - Configuring RBAC rules ...
	I0914 10:34:14.245962    4633 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0914 10:34:14.246005    4633 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0914 10:34:14.248170    4633 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0914 10:34:14.253401    4633 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0914 10:34:14.254249    4633 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0914 10:34:14.255120    4633 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0914 10:34:14.258399    4633 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0914 10:34:14.442765    4633 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0914 10:34:14.643557    4633 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0914 10:34:14.644097    4633 kubeadm.go:310] 
	I0914 10:34:14.644128    4633 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0914 10:34:14.644132    4633 kubeadm.go:310] 
	I0914 10:34:14.644169    4633 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0914 10:34:14.644173    4633 kubeadm.go:310] 
	I0914 10:34:14.644185    4633 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0914 10:34:14.644214    4633 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0914 10:34:14.644247    4633 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0914 10:34:14.644255    4633 kubeadm.go:310] 
	I0914 10:34:14.644283    4633 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0914 10:34:14.644298    4633 kubeadm.go:310] 
	I0914 10:34:14.644325    4633 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0914 10:34:14.644329    4633 kubeadm.go:310] 
	I0914 10:34:14.644361    4633 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0914 10:34:14.644397    4633 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0914 10:34:14.644440    4633 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0914 10:34:14.644445    4633 kubeadm.go:310] 
	I0914 10:34:14.644493    4633 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0914 10:34:14.644559    4633 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0914 10:34:14.644563    4633 kubeadm.go:310] 
	I0914 10:34:14.644606    4633 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token tndwzs.bc88b49vrocmhecw \
	I0914 10:34:14.644668    4633 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6f2bcbe86b7524eabb66e32d65311e5f1e28ed403ce521627df0d2c85d84c574 \
	I0914 10:34:14.644687    4633 kubeadm.go:310] 	--control-plane 
	I0914 10:34:14.644691    4633 kubeadm.go:310] 
	I0914 10:34:14.644763    4633 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0914 10:34:14.644769    4633 kubeadm.go:310] 
	I0914 10:34:14.644824    4633 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token tndwzs.bc88b49vrocmhecw \
	I0914 10:34:14.644884    4633 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6f2bcbe86b7524eabb66e32d65311e5f1e28ed403ce521627df0d2c85d84c574 
	I0914 10:34:14.644936    4633 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 10:34:14.644942    4633 cni.go:84] Creating CNI manager for ""
	I0914 10:34:14.644949    4633 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 10:34:14.649463    4633 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 10:34:14.656422    4633 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 10:34:14.659433    4633 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
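(Editorial note: the 496-byte conflist scp'd above is minikube's bridge CNI configuration; its exact contents are not shown in this log. The following is a hedged sketch of a typical bridge conflist — every field value here is an assumption for illustration, and the real file minikube copies may differ.)

    # Sketch: write an illustrative bridge CNI conflist.
    # All values below are assumptions, not taken from this run.
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF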
	I0914 10:34:14.663969    4633 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 10:34:14.664029    4633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 10:34:14.664038    4633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-158000 minikube.k8s.io/updated_at=2024_09_14T10_34_14_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=fbeeb744274463b05401c917e5ab21bbaf5ef95a minikube.k8s.io/name=running-upgrade-158000 minikube.k8s.io/primary=true
	I0914 10:34:14.714455    4633 kubeadm.go:1113] duration metric: took 50.468375ms to wait for elevateKubeSystemPrivileges
	I0914 10:34:14.714492    4633 ops.go:34] apiserver oom_adj: -16
	I0914 10:34:14.714497    4633 kubeadm.go:394] duration metric: took 4m11.877349167s to StartCluster
	I0914 10:34:14.714507    4633 settings.go:142] acquiring lock: {Name:mk7db576f28fda26cf1d7d854618889d7d4f8a49 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 10:34:14.714603    4633 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19643-1079/kubeconfig
	I0914 10:34:14.715004    4633 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19643-1079/kubeconfig: {Name:mk2bfa274931cfcaab81c340801bce4006cf7459 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 10:34:14.715232    4633 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 10:34:14.715244    4633 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0914 10:34:14.715277    4633 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-158000"
	I0914 10:34:14.715283    4633 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-158000"
	I0914 10:34:14.715288    4633 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-158000"
	W0914 10:34:14.715292    4633 addons.go:243] addon storage-provisioner should already be in state true
	I0914 10:34:14.715297    4633 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-158000"
	I0914 10:34:14.715304    4633 host.go:66] Checking if "running-upgrade-158000" exists ...
	I0914 10:34:14.715339    4633 config.go:182] Loaded profile config "running-upgrade-158000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0914 10:34:14.716126    4633 kapi.go:59] client config for running-upgrade-158000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/running-upgrade-158000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/running-upgrade-158000/client.key", CAFile:"/Users/jenkins/minikube-integration/19643-1079/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x102159800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0914 10:34:14.716248    4633 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-158000"
	W0914 10:34:14.716253    4633 addons.go:243] addon default-storageclass should already be in state true
	I0914 10:34:14.716260    4633 host.go:66] Checking if "running-upgrade-158000" exists ...
	I0914 10:34:14.719434    4633 out.go:177] * Verifying Kubernetes components...
	I0914 10:34:14.719771    4633 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 10:34:14.723693    4633 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 10:34:14.723707    4633 sshutil.go:53] new ssh client: &{IP:localhost Port:50246 SSHKeyPath:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/running-upgrade-158000/id_rsa Username:docker}
	I0914 10:34:14.727401    4633 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 10:34:14.731536    4633 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 10:34:14.734438    4633 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 10:34:14.734444    4633 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 10:34:14.734450    4633 sshutil.go:53] new ssh client: &{IP:localhost Port:50246 SSHKeyPath:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/running-upgrade-158000/id_rsa Username:docker}
	I0914 10:34:14.828362    4633 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 10:34:14.834107    4633 api_server.go:52] waiting for apiserver process to appear ...
	I0914 10:34:14.834161    4633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 10:34:14.837945    4633 api_server.go:72] duration metric: took 122.707834ms to wait for apiserver process to appear ...
	I0914 10:34:14.837954    4633 api_server.go:88] waiting for apiserver healthz status ...
	I0914 10:34:14.837961    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:34:14.856680    4633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 10:34:14.924949    4633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 10:34:15.211214    4633 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0914 10:34:15.211227    4633 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0914 10:34:19.839885    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:34:19.839936    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:34:24.840110    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:34:24.840142    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:34:29.840289    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:34:29.840323    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:34:34.840572    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:34:34.840604    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:34:39.841021    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:34:39.841085    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:34:44.841647    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:34:44.841668    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0914 10:34:45.211423    4633 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0914 10:34:45.215754    4633 out.go:177] * Enabled addons: storage-provisioner
	I0914 10:34:45.222651    4633 addons.go:510] duration metric: took 30.508691458s for enable addons: enabled=[storage-provisioner]
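(Editorial note: the default-storageclass callback above failed only because the apiserver never answered; once it is reachable, the same check can be retried by hand. A minimal sketch using the bundled kubectl and kubeconfig paths that appear throughout this log.)

    # Sketch: verify which StorageClass is marked default after the
    # storage-provisioner addon is enabled (requires a healthy apiserver).
    sudo /var/lib/minikube/binaries/v1.24.1/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig \
      get storageclass -o wide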
	I0914 10:34:49.842357    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:34:49.842381    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:34:54.843037    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:34:54.843075    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:34:59.844566    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:34:59.844608    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:35:04.846449    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:35:04.846495    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:35:09.848505    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:35:09.848529    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:35:14.850503    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:35:14.850611    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:35:14.862421    4633 logs.go:276] 1 containers: [04291acc9ea5]
	I0914 10:35:14.862510    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:35:14.873262    4633 logs.go:276] 1 containers: [849350e1760a]
	I0914 10:35:14.873336    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:35:14.883745    4633 logs.go:276] 2 containers: [bb0d72a796ab a39016b44acb]
	I0914 10:35:14.883833    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:35:14.893909    4633 logs.go:276] 1 containers: [453a3041b38a]
	I0914 10:35:14.893983    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:35:14.904472    4633 logs.go:276] 1 containers: [f73d335e1ea1]
	I0914 10:35:14.904555    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:35:14.915362    4633 logs.go:276] 1 containers: [0f8efd6fef5c]
	I0914 10:35:14.915454    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:35:14.925683    4633 logs.go:276] 0 containers: []
	W0914 10:35:14.925694    4633 logs.go:278] No container was found matching "kindnet"
	I0914 10:35:14.925764    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:35:14.935732    4633 logs.go:276] 1 containers: [7fc5fd563cda]
	I0914 10:35:14.935745    4633 logs.go:123] Gathering logs for kube-scheduler [453a3041b38a] ...
	I0914 10:35:14.935751    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 453a3041b38a"
	I0914 10:35:14.950730    4633 logs.go:123] Gathering logs for coredns [bb0d72a796ab] ...
	I0914 10:35:14.950741    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0d72a796ab"
	I0914 10:35:14.962277    4633 logs.go:123] Gathering logs for coredns [a39016b44acb] ...
	I0914 10:35:14.962286    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39016b44acb"
	I0914 10:35:14.973725    4633 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:35:14.973736    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:35:15.007242    4633 logs.go:123] Gathering logs for kube-apiserver [04291acc9ea5] ...
	I0914 10:35:15.007257    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04291acc9ea5"
	I0914 10:35:15.021790    4633 logs.go:123] Gathering logs for etcd [849350e1760a] ...
	I0914 10:35:15.021806    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849350e1760a"
	I0914 10:35:15.035567    4633 logs.go:123] Gathering logs for kube-proxy [f73d335e1ea1] ...
	I0914 10:35:15.035577    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f73d335e1ea1"
	I0914 10:35:15.047753    4633 logs.go:123] Gathering logs for kube-controller-manager [0f8efd6fef5c] ...
	I0914 10:35:15.047765    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8efd6fef5c"
	I0914 10:35:15.065450    4633 logs.go:123] Gathering logs for storage-provisioner [7fc5fd563cda] ...
	I0914 10:35:15.065459    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc5fd563cda"
	I0914 10:35:15.077287    4633 logs.go:123] Gathering logs for kubelet ...
	I0914 10:35:15.077297    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:35:15.111430    4633 logs.go:123] Gathering logs for dmesg ...
	I0914 10:35:15.111441    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:35:15.116430    4633 logs.go:123] Gathering logs for Docker ...
	I0914 10:35:15.116437    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:35:15.141121    4633 logs.go:123] Gathering logs for container status ...
	I0914 10:35:15.141129    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:35:17.654075    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:35:22.656088    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:35:22.656252    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:35:22.669287    4633 logs.go:276] 1 containers: [04291acc9ea5]
	I0914 10:35:22.669389    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:35:22.680885    4633 logs.go:276] 1 containers: [849350e1760a]
	I0914 10:35:22.680974    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:35:22.691550    4633 logs.go:276] 2 containers: [bb0d72a796ab a39016b44acb]
	I0914 10:35:22.691637    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:35:22.702094    4633 logs.go:276] 1 containers: [453a3041b38a]
	I0914 10:35:22.702163    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:35:22.712554    4633 logs.go:276] 1 containers: [f73d335e1ea1]
	I0914 10:35:22.712639    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:35:22.722931    4633 logs.go:276] 1 containers: [0f8efd6fef5c]
	I0914 10:35:22.723016    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:35:22.733295    4633 logs.go:276] 0 containers: []
	W0914 10:35:22.733306    4633 logs.go:278] No container was found matching "kindnet"
	I0914 10:35:22.733374    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:35:22.745318    4633 logs.go:276] 1 containers: [7fc5fd563cda]
	I0914 10:35:22.745335    4633 logs.go:123] Gathering logs for container status ...
	I0914 10:35:22.745341    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:35:22.756277    4633 logs.go:123] Gathering logs for dmesg ...
	I0914 10:35:22.756287    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:35:22.760903    4633 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:35:22.760912    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:35:22.795250    4633 logs.go:123] Gathering logs for kube-apiserver [04291acc9ea5] ...
	I0914 10:35:22.795262    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04291acc9ea5"
	I0914 10:35:22.816811    4633 logs.go:123] Gathering logs for kube-controller-manager [0f8efd6fef5c] ...
	I0914 10:35:22.816822    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8efd6fef5c"
	I0914 10:35:22.834661    4633 logs.go:123] Gathering logs for kube-scheduler [453a3041b38a] ...
	I0914 10:35:22.834673    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 453a3041b38a"
	I0914 10:35:22.849953    4633 logs.go:123] Gathering logs for kube-proxy [f73d335e1ea1] ...
	I0914 10:35:22.849970    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f73d335e1ea1"
	I0914 10:35:22.862179    4633 logs.go:123] Gathering logs for storage-provisioner [7fc5fd563cda] ...
	I0914 10:35:22.862189    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc5fd563cda"
	I0914 10:35:22.873951    4633 logs.go:123] Gathering logs for Docker ...
	I0914 10:35:22.873962    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:35:22.898983    4633 logs.go:123] Gathering logs for kubelet ...
	I0914 10:35:22.898996    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:35:22.933697    4633 logs.go:123] Gathering logs for etcd [849350e1760a] ...
	I0914 10:35:22.933705    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849350e1760a"
	I0914 10:35:22.947288    4633 logs.go:123] Gathering logs for coredns [bb0d72a796ab] ...
	I0914 10:35:22.947299    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0d72a796ab"
	I0914 10:35:22.958595    4633 logs.go:123] Gathering logs for coredns [a39016b44acb] ...
	I0914 10:35:22.958606    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39016b44acb"
	I0914 10:35:25.471986    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:35:30.474151    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:35:30.474580    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:35:30.524499    4633 logs.go:276] 1 containers: [04291acc9ea5]
	I0914 10:35:30.524634    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:35:30.540104    4633 logs.go:276] 1 containers: [849350e1760a]
	I0914 10:35:30.540215    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:35:30.553714    4633 logs.go:276] 2 containers: [bb0d72a796ab a39016b44acb]
	I0914 10:35:30.553808    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:35:30.564291    4633 logs.go:276] 1 containers: [453a3041b38a]
	I0914 10:35:30.564369    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:35:30.574567    4633 logs.go:276] 1 containers: [f73d335e1ea1]
	I0914 10:35:30.574649    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:35:30.585235    4633 logs.go:276] 1 containers: [0f8efd6fef5c]
	I0914 10:35:30.585324    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:35:30.595814    4633 logs.go:276] 0 containers: []
	W0914 10:35:30.595823    4633 logs.go:278] No container was found matching "kindnet"
	I0914 10:35:30.595890    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:35:30.606062    4633 logs.go:276] 1 containers: [7fc5fd563cda]
	I0914 10:35:30.606078    4633 logs.go:123] Gathering logs for storage-provisioner [7fc5fd563cda] ...
	I0914 10:35:30.606083    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc5fd563cda"
	I0914 10:35:30.617635    4633 logs.go:123] Gathering logs for kubelet ...
	I0914 10:35:30.617646    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:35:30.652323    4633 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:35:30.652333    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:35:30.687063    4633 logs.go:123] Gathering logs for coredns [a39016b44acb] ...
	I0914 10:35:30.687077    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39016b44acb"
	I0914 10:35:30.699032    4633 logs.go:123] Gathering logs for kube-scheduler [453a3041b38a] ...
	I0914 10:35:30.699043    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 453a3041b38a"
	I0914 10:35:30.714050    4633 logs.go:123] Gathering logs for kube-controller-manager [0f8efd6fef5c] ...
	I0914 10:35:30.714063    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8efd6fef5c"
	I0914 10:35:30.731560    4633 logs.go:123] Gathering logs for Docker ...
	I0914 10:35:30.731571    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:35:30.755802    4633 logs.go:123] Gathering logs for container status ...
	I0914 10:35:30.755822    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:35:30.767561    4633 logs.go:123] Gathering logs for dmesg ...
	I0914 10:35:30.767573    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:35:30.772511    4633 logs.go:123] Gathering logs for kube-apiserver [04291acc9ea5] ...
	I0914 10:35:30.772521    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04291acc9ea5"
	I0914 10:35:30.787220    4633 logs.go:123] Gathering logs for etcd [849350e1760a] ...
	I0914 10:35:30.787231    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849350e1760a"
	I0914 10:35:30.804462    4633 logs.go:123] Gathering logs for coredns [bb0d72a796ab] ...
	I0914 10:35:30.804472    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0d72a796ab"
	I0914 10:35:30.816270    4633 logs.go:123] Gathering logs for kube-proxy [f73d335e1ea1] ...
	I0914 10:35:30.816280    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f73d335e1ea1"
	I0914 10:35:33.330291    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:35:38.332906    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:35:38.333443    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:35:38.373239    4633 logs.go:276] 1 containers: [04291acc9ea5]
	I0914 10:35:38.373409    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:35:38.399591    4633 logs.go:276] 1 containers: [849350e1760a]
	I0914 10:35:38.399710    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:35:38.422398    4633 logs.go:276] 2 containers: [bb0d72a796ab a39016b44acb]
	I0914 10:35:38.422489    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:35:38.433664    4633 logs.go:276] 1 containers: [453a3041b38a]
	I0914 10:35:38.433748    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:35:38.444775    4633 logs.go:276] 1 containers: [f73d335e1ea1]
	I0914 10:35:38.444858    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:35:38.455308    4633 logs.go:276] 1 containers: [0f8efd6fef5c]
	I0914 10:35:38.455384    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:35:38.465822    4633 logs.go:276] 0 containers: []
	W0914 10:35:38.465833    4633 logs.go:278] No container was found matching "kindnet"
	I0914 10:35:38.465909    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:35:38.476131    4633 logs.go:276] 1 containers: [7fc5fd563cda]
	I0914 10:35:38.476146    4633 logs.go:123] Gathering logs for kubelet ...
	I0914 10:35:38.476152    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:35:38.511339    4633 logs.go:123] Gathering logs for kube-apiserver [04291acc9ea5] ...
	I0914 10:35:38.511351    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04291acc9ea5"
	I0914 10:35:38.526250    4633 logs.go:123] Gathering logs for etcd [849350e1760a] ...
	I0914 10:35:38.526259    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849350e1760a"
	I0914 10:35:38.541484    4633 logs.go:123] Gathering logs for coredns [a39016b44acb] ...
	I0914 10:35:38.541493    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39016b44acb"
	I0914 10:35:38.552871    4633 logs.go:123] Gathering logs for kube-scheduler [453a3041b38a] ...
	I0914 10:35:38.552881    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 453a3041b38a"
	I0914 10:35:38.568218    4633 logs.go:123] Gathering logs for Docker ...
	I0914 10:35:38.568227    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:35:38.591255    4633 logs.go:123] Gathering logs for container status ...
	I0914 10:35:38.591261    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:35:38.602874    4633 logs.go:123] Gathering logs for dmesg ...
	I0914 10:35:38.602890    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:35:38.607611    4633 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:35:38.607620    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:35:38.641600    4633 logs.go:123] Gathering logs for coredns [bb0d72a796ab] ...
	I0914 10:35:38.641611    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0d72a796ab"
	I0914 10:35:38.654747    4633 logs.go:123] Gathering logs for kube-proxy [f73d335e1ea1] ...
	I0914 10:35:38.654757    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f73d335e1ea1"
	I0914 10:35:38.666451    4633 logs.go:123] Gathering logs for kube-controller-manager [0f8efd6fef5c] ...
	I0914 10:35:38.666461    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8efd6fef5c"
	I0914 10:35:38.683676    4633 logs.go:123] Gathering logs for storage-provisioner [7fc5fd563cda] ...
	I0914 10:35:38.683690    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc5fd563cda"
	I0914 10:35:41.197556    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:35:46.198910    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:35:46.199087    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:35:46.214346    4633 logs.go:276] 1 containers: [04291acc9ea5]
	I0914 10:35:46.214453    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:35:46.226241    4633 logs.go:276] 1 containers: [849350e1760a]
	I0914 10:35:46.226327    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:35:46.236714    4633 logs.go:276] 2 containers: [bb0d72a796ab a39016b44acb]
	I0914 10:35:46.236801    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:35:46.246688    4633 logs.go:276] 1 containers: [453a3041b38a]
	I0914 10:35:46.246760    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:35:46.257507    4633 logs.go:276] 1 containers: [f73d335e1ea1]
	I0914 10:35:46.257598    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:35:46.268964    4633 logs.go:276] 1 containers: [0f8efd6fef5c]
	I0914 10:35:46.269046    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:35:46.279457    4633 logs.go:276] 0 containers: []
	W0914 10:35:46.279472    4633 logs.go:278] No container was found matching "kindnet"
	I0914 10:35:46.279548    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:35:46.289913    4633 logs.go:276] 1 containers: [7fc5fd563cda]
	I0914 10:35:46.289934    4633 logs.go:123] Gathering logs for kubelet ...
	I0914 10:35:46.289939    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:35:46.323082    4633 logs.go:123] Gathering logs for coredns [bb0d72a796ab] ...
	I0914 10:35:46.323090    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0d72a796ab"
	I0914 10:35:46.334509    4633 logs.go:123] Gathering logs for kube-proxy [f73d335e1ea1] ...
	I0914 10:35:46.334518    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f73d335e1ea1"
	I0914 10:35:46.347916    4633 logs.go:123] Gathering logs for kube-controller-manager [0f8efd6fef5c] ...
	I0914 10:35:46.347925    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8efd6fef5c"
	I0914 10:35:46.367903    4633 logs.go:123] Gathering logs for dmesg ...
	I0914 10:35:46.367910    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:35:46.372556    4633 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:35:46.372563    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:35:46.406537    4633 logs.go:123] Gathering logs for kube-apiserver [04291acc9ea5] ...
	I0914 10:35:46.406548    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04291acc9ea5"
	I0914 10:35:46.421259    4633 logs.go:123] Gathering logs for etcd [849350e1760a] ...
	I0914 10:35:46.421271    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849350e1760a"
	I0914 10:35:46.438679    4633 logs.go:123] Gathering logs for coredns [a39016b44acb] ...
	I0914 10:35:46.438689    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39016b44acb"
	I0914 10:35:46.450534    4633 logs.go:123] Gathering logs for kube-scheduler [453a3041b38a] ...
	I0914 10:35:46.450544    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 453a3041b38a"
	I0914 10:35:46.473768    4633 logs.go:123] Gathering logs for storage-provisioner [7fc5fd563cda] ...
	I0914 10:35:46.473778    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc5fd563cda"
	I0914 10:35:46.485147    4633 logs.go:123] Gathering logs for Docker ...
	I0914 10:35:46.485157    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:35:46.509135    4633 logs.go:123] Gathering logs for container status ...
	I0914 10:35:46.509145    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:35:49.022944    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:35:54.025105    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:35:54.025289    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:35:54.043248    4633 logs.go:276] 1 containers: [04291acc9ea5]
	I0914 10:35:54.043335    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:35:54.054480    4633 logs.go:276] 1 containers: [849350e1760a]
	I0914 10:35:54.054564    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:35:54.065927    4633 logs.go:276] 2 containers: [bb0d72a796ab a39016b44acb]
	I0914 10:35:54.066014    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:35:54.077178    4633 logs.go:276] 1 containers: [453a3041b38a]
	I0914 10:35:54.077262    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:35:54.087704    4633 logs.go:276] 1 containers: [f73d335e1ea1]
	I0914 10:35:54.087785    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:35:54.098210    4633 logs.go:276] 1 containers: [0f8efd6fef5c]
	I0914 10:35:54.098294    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:35:54.108847    4633 logs.go:276] 0 containers: []
	W0914 10:35:54.108857    4633 logs.go:278] No container was found matching "kindnet"
	I0914 10:35:54.108927    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:35:54.119129    4633 logs.go:276] 1 containers: [7fc5fd563cda]
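	Between probes, minikube re-enumerates the control-plane containers, one docker ps -a --filter=name=k8s_<component> --format={{.ID}} call per component; the kindnet filter matches nothing on this cluster, hence the repeated W-level "No container was found matching \"kindnet\"". A rough Go equivalent of that enumeration pass, as a sketch rather than minikube's code:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists containers whose names carry the k8s_<component>
// prefix that dockershim gives Kubernetes pod containers. Errors are
// folded into an empty result, which is enough for a sketch.
func containerIDs(component string) []string {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil
	}
	return strings.Fields(string(out))
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager",
		"kindnet", "storage-provisioner"} {
		ids := containerIDs(c)
		if len(ids) == 0 {
			fmt.Printf("W: No container was found matching %q\n", c)
			continue
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids)
	}
}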
	I0914 10:35:54.119147    4633 logs.go:123] Gathering logs for coredns [a39016b44acb] ...
	I0914 10:35:54.119152    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39016b44acb"
	I0914 10:35:54.135450    4633 logs.go:123] Gathering logs for kube-scheduler [453a3041b38a] ...
	I0914 10:35:54.135464    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 453a3041b38a"
	I0914 10:35:54.150429    4633 logs.go:123] Gathering logs for kube-proxy [f73d335e1ea1] ...
	I0914 10:35:54.150439    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f73d335e1ea1"
	I0914 10:35:54.166382    4633 logs.go:123] Gathering logs for storage-provisioner [7fc5fd563cda] ...
	I0914 10:35:54.166394    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc5fd563cda"
	I0914 10:35:54.177918    4633 logs.go:123] Gathering logs for kubelet ...
	I0914 10:35:54.177927    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:35:54.212976    4633 logs.go:123] Gathering logs for kube-apiserver [04291acc9ea5] ...
	I0914 10:35:54.212985    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04291acc9ea5"
	I0914 10:35:54.227039    4633 logs.go:123] Gathering logs for etcd [849350e1760a] ...
	I0914 10:35:54.227048    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849350e1760a"
	I0914 10:35:54.240942    4633 logs.go:123] Gathering logs for coredns [bb0d72a796ab] ...
	I0914 10:35:54.240955    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0d72a796ab"
	I0914 10:35:54.251996    4633 logs.go:123] Gathering logs for Docker ...
	I0914 10:35:54.252006    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:35:54.276772    4633 logs.go:123] Gathering logs for dmesg ...
	I0914 10:35:54.276781    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:35:54.281266    4633 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:35:54.281276    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:35:54.316810    4633 logs.go:123] Gathering logs for kube-controller-manager [0f8efd6fef5c] ...
	I0914 10:35:54.316827    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8efd6fef5c"
	I0914 10:35:54.335404    4633 logs.go:123] Gathering logs for container status ...
	I0914 10:35:54.335414    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
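	Each gathering pass ends with the "container status" one-liner above. In that shell command, `which crictl || echo crictl` substitutes the bare word crictl when the binary is not on PATH, so the first ps invocation fails and || sudo docker ps -a takes over. The same prefer-crictl-else-docker fallback, sketched in Go with illustrative identifiers:

package main

import (
	"fmt"
	"os/exec"
)

// containerStatus prefers crictl when it is on PATH and falls back to
// docker otherwise, mirroring the shell one-liner in the log.
func containerStatus() ([]byte, error) {
	if path, err := exec.LookPath("crictl"); err == nil {
		return exec.Command("sudo", path, "ps", "-a").CombinedOutput()
	}
	return exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Print(string(out))
}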
	I0914 10:35:56.850168    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:36:01.852293    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:36:01.852579    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:36:01.882325    4633 logs.go:276] 1 containers: [04291acc9ea5]
	I0914 10:36:01.882461    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:36:01.905569    4633 logs.go:276] 1 containers: [849350e1760a]
	I0914 10:36:01.905672    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:36:01.918755    4633 logs.go:276] 2 containers: [bb0d72a796ab a39016b44acb]
	I0914 10:36:01.918842    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:36:01.929680    4633 logs.go:276] 1 containers: [453a3041b38a]
	I0914 10:36:01.929760    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:36:01.940208    4633 logs.go:276] 1 containers: [f73d335e1ea1]
	I0914 10:36:01.940288    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:36:01.951156    4633 logs.go:276] 1 containers: [0f8efd6fef5c]
	I0914 10:36:01.951230    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:36:01.963210    4633 logs.go:276] 0 containers: []
	W0914 10:36:01.963222    4633 logs.go:278] No container was found matching "kindnet"
	I0914 10:36:01.963303    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:36:01.974034    4633 logs.go:276] 1 containers: [7fc5fd563cda]
	I0914 10:36:01.974049    4633 logs.go:123] Gathering logs for kubelet ...
	I0914 10:36:01.974054    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:36:02.009569    4633 logs.go:123] Gathering logs for kube-apiserver [04291acc9ea5] ...
	I0914 10:36:02.009576    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04291acc9ea5"
	I0914 10:36:02.027770    4633 logs.go:123] Gathering logs for coredns [bb0d72a796ab] ...
	I0914 10:36:02.027785    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0d72a796ab"
	I0914 10:36:02.039107    4633 logs.go:123] Gathering logs for kube-proxy [f73d335e1ea1] ...
	I0914 10:36:02.039117    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f73d335e1ea1"
	I0914 10:36:02.051719    4633 logs.go:123] Gathering logs for storage-provisioner [7fc5fd563cda] ...
	I0914 10:36:02.051729    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc5fd563cda"
	I0914 10:36:02.063527    4633 logs.go:123] Gathering logs for Docker ...
	I0914 10:36:02.063538    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:36:02.088315    4633 logs.go:123] Gathering logs for dmesg ...
	I0914 10:36:02.088326    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:36:02.092475    4633 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:36:02.092481    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:36:02.132617    4633 logs.go:123] Gathering logs for etcd [849350e1760a] ...
	I0914 10:36:02.132629    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849350e1760a"
	I0914 10:36:02.146891    4633 logs.go:123] Gathering logs for coredns [a39016b44acb] ...
	I0914 10:36:02.146904    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39016b44acb"
	I0914 10:36:02.158160    4633 logs.go:123] Gathering logs for kube-scheduler [453a3041b38a] ...
	I0914 10:36:02.158171    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 453a3041b38a"
	I0914 10:36:02.173301    4633 logs.go:123] Gathering logs for kube-controller-manager [0f8efd6fef5c] ...
	I0914 10:36:02.173316    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8efd6fef5c"
	I0914 10:36:02.190904    4633 logs.go:123] Gathering logs for container status ...
	I0914 10:36:02.190915    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:36:04.704251    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:36:09.706509    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:36:09.706991    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:36:09.749011    4633 logs.go:276] 1 containers: [04291acc9ea5]
	I0914 10:36:09.749170    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:36:09.768844    4633 logs.go:276] 1 containers: [849350e1760a]
	I0914 10:36:09.768950    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:36:09.784209    4633 logs.go:276] 2 containers: [bb0d72a796ab a39016b44acb]
	I0914 10:36:09.784303    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:36:09.796061    4633 logs.go:276] 1 containers: [453a3041b38a]
	I0914 10:36:09.796149    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:36:09.806370    4633 logs.go:276] 1 containers: [f73d335e1ea1]
	I0914 10:36:09.806456    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:36:09.816823    4633 logs.go:276] 1 containers: [0f8efd6fef5c]
	I0914 10:36:09.816904    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:36:09.827876    4633 logs.go:276] 0 containers: []
	W0914 10:36:09.827887    4633 logs.go:278] No container was found matching "kindnet"
	I0914 10:36:09.827958    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:36:09.838525    4633 logs.go:276] 1 containers: [7fc5fd563cda]
	I0914 10:36:09.838541    4633 logs.go:123] Gathering logs for dmesg ...
	I0914 10:36:09.838548    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:36:09.843428    4633 logs.go:123] Gathering logs for kube-apiserver [04291acc9ea5] ...
	I0914 10:36:09.843434    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04291acc9ea5"
	I0914 10:36:09.857532    4633 logs.go:123] Gathering logs for coredns [bb0d72a796ab] ...
	I0914 10:36:09.857542    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0d72a796ab"
	I0914 10:36:09.869062    4633 logs.go:123] Gathering logs for kube-scheduler [453a3041b38a] ...
	I0914 10:36:09.869077    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 453a3041b38a"
	I0914 10:36:09.883951    4633 logs.go:123] Gathering logs for kube-controller-manager [0f8efd6fef5c] ...
	I0914 10:36:09.883961    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8efd6fef5c"
	I0914 10:36:09.903650    4633 logs.go:123] Gathering logs for storage-provisioner [7fc5fd563cda] ...
	I0914 10:36:09.903662    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc5fd563cda"
	I0914 10:36:09.915190    4633 logs.go:123] Gathering logs for container status ...
	I0914 10:36:09.915204    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:36:09.926821    4633 logs.go:123] Gathering logs for kubelet ...
	I0914 10:36:09.926834    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:36:09.960645    4633 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:36:09.960658    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:36:09.995725    4633 logs.go:123] Gathering logs for etcd [849350e1760a] ...
	I0914 10:36:09.995739    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849350e1760a"
	I0914 10:36:10.009508    4633 logs.go:123] Gathering logs for coredns [a39016b44acb] ...
	I0914 10:36:10.009522    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39016b44acb"
	I0914 10:36:10.020576    4633 logs.go:123] Gathering logs for kube-proxy [f73d335e1ea1] ...
	I0914 10:36:10.020589    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f73d335e1ea1"
	I0914 10:36:10.032028    4633 logs.go:123] Gathering logs for Docker ...
	I0914 10:36:10.032043    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:36:12.557226    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:36:17.559445    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:36:17.559632    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:36:17.575282    4633 logs.go:276] 1 containers: [04291acc9ea5]
	I0914 10:36:17.575384    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:36:17.587070    4633 logs.go:276] 1 containers: [849350e1760a]
	I0914 10:36:17.587145    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:36:17.598078    4633 logs.go:276] 4 containers: [426f46946fcd 40433f7e0d05 bb0d72a796ab a39016b44acb]
	I0914 10:36:17.598148    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:36:17.609369    4633 logs.go:276] 1 containers: [453a3041b38a]
	I0914 10:36:17.609458    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:36:17.620757    4633 logs.go:276] 1 containers: [f73d335e1ea1]
	I0914 10:36:17.620838    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:36:17.630531    4633 logs.go:276] 1 containers: [0f8efd6fef5c]
	I0914 10:36:17.630613    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:36:17.641492    4633 logs.go:276] 0 containers: []
	W0914 10:36:17.641504    4633 logs.go:278] No container was found matching "kindnet"
	I0914 10:36:17.641578    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:36:17.651928    4633 logs.go:276] 1 containers: [7fc5fd563cda]
	I0914 10:36:17.651944    4633 logs.go:123] Gathering logs for kube-scheduler [453a3041b38a] ...
	I0914 10:36:17.651950    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 453a3041b38a"
	I0914 10:36:17.666902    4633 logs.go:123] Gathering logs for kube-controller-manager [0f8efd6fef5c] ...
	I0914 10:36:17.666911    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8efd6fef5c"
	I0914 10:36:17.684047    4633 logs.go:123] Gathering logs for storage-provisioner [7fc5fd563cda] ...
	I0914 10:36:17.684057    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc5fd563cda"
	I0914 10:36:17.695703    4633 logs.go:123] Gathering logs for coredns [a39016b44acb] ...
	I0914 10:36:17.695713    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39016b44acb"
	I0914 10:36:17.717094    4633 logs.go:123] Gathering logs for kube-proxy [f73d335e1ea1] ...
	I0914 10:36:17.717103    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f73d335e1ea1"
	I0914 10:36:17.736321    4633 logs.go:123] Gathering logs for Docker ...
	I0914 10:36:17.736330    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:36:17.760350    4633 logs.go:123] Gathering logs for kubelet ...
	I0914 10:36:17.760362    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:36:17.795248    4633 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:36:17.795256    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:36:17.832278    4633 logs.go:123] Gathering logs for coredns [426f46946fcd] ...
	I0914 10:36:17.832289    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 426f46946fcd"
	I0914 10:36:17.843355    4633 logs.go:123] Gathering logs for coredns [40433f7e0d05] ...
	I0914 10:36:17.843369    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40433f7e0d05"
	I0914 10:36:17.855446    4633 logs.go:123] Gathering logs for dmesg ...
	I0914 10:36:17.855460    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:36:17.860124    4633 logs.go:123] Gathering logs for kube-apiserver [04291acc9ea5] ...
	I0914 10:36:17.860130    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04291acc9ea5"
	I0914 10:36:17.875516    4633 logs.go:123] Gathering logs for etcd [849350e1760a] ...
	I0914 10:36:17.875525    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849350e1760a"
	I0914 10:36:17.889315    4633 logs.go:123] Gathering logs for coredns [bb0d72a796ab] ...
	I0914 10:36:17.889325    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0d72a796ab"
	I0914 10:36:17.901047    4633 logs.go:123] Gathering logs for container status ...
	I0914 10:36:17.901058    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
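	From the 10:36:17 pass onward, the coredns filter returns four container IDs instead of two; 426f46946fcd and 40433f7e0d05 are new, which is consistent with the coredns pods being recreated while the apiserver stays unreachable. Each per-container "Gathering logs" pair above is a capped read of that container's output, so a crash-looping container cannot flood the report. A sketch of that tail read, using the real IDs from the log but a hypothetical helper name:

package main

import (
	"fmt"
	"os/exec"
)

// tailLogs mirrors the `docker logs --tail 400 <id>` reads in the log:
// collect at most the last 400 lines of a container's output.
func tailLogs(id string) ([]byte, error) {
	return exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
}

func main() {
	// the four coredns container IDs reported by logs.go:276 at 10:36:17
	for _, id := range []string{"426f46946fcd", "40433f7e0d05", "bb0d72a796ab", "a39016b44acb"} {
		out, err := tailLogs(id)
		if err != nil {
			fmt.Printf("%s: %v\n", id, err)
			continue
		}
		fmt.Printf("%s: %d bytes of log output\n", id, len(out))
	}
}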
	I0914 10:36:20.414839    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:36:25.417004    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:36:25.417294    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:36:25.445883    4633 logs.go:276] 1 containers: [04291acc9ea5]
	I0914 10:36:25.446021    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:36:25.463759    4633 logs.go:276] 1 containers: [849350e1760a]
	I0914 10:36:25.463867    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:36:25.477532    4633 logs.go:276] 4 containers: [426f46946fcd 40433f7e0d05 bb0d72a796ab a39016b44acb]
	I0914 10:36:25.477629    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:36:25.489332    4633 logs.go:276] 1 containers: [453a3041b38a]
	I0914 10:36:25.489417    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:36:25.499954    4633 logs.go:276] 1 containers: [f73d335e1ea1]
	I0914 10:36:25.500037    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:36:25.510377    4633 logs.go:276] 1 containers: [0f8efd6fef5c]
	I0914 10:36:25.510463    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:36:25.521275    4633 logs.go:276] 0 containers: []
	W0914 10:36:25.521285    4633 logs.go:278] No container was found matching "kindnet"
	I0914 10:36:25.521350    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:36:25.531987    4633 logs.go:276] 1 containers: [7fc5fd563cda]
	I0914 10:36:25.532003    4633 logs.go:123] Gathering logs for etcd [849350e1760a] ...
	I0914 10:36:25.532008    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849350e1760a"
	I0914 10:36:25.547245    4633 logs.go:123] Gathering logs for coredns [bb0d72a796ab] ...
	I0914 10:36:25.547256    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0d72a796ab"
	I0914 10:36:25.559305    4633 logs.go:123] Gathering logs for storage-provisioner [7fc5fd563cda] ...
	I0914 10:36:25.559315    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc5fd563cda"
	I0914 10:36:25.570918    4633 logs.go:123] Gathering logs for Docker ...
	I0914 10:36:25.570929    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:36:25.594354    4633 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:36:25.594362    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:36:25.627759    4633 logs.go:123] Gathering logs for coredns [40433f7e0d05] ...
	I0914 10:36:25.627774    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40433f7e0d05"
	I0914 10:36:25.639539    4633 logs.go:123] Gathering logs for coredns [a39016b44acb] ...
	I0914 10:36:25.639550    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39016b44acb"
	I0914 10:36:25.651615    4633 logs.go:123] Gathering logs for container status ...
	I0914 10:36:25.651630    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:36:25.663628    4633 logs.go:123] Gathering logs for kubelet ...
	I0914 10:36:25.663638    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:36:25.698198    4633 logs.go:123] Gathering logs for dmesg ...
	I0914 10:36:25.698208    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:36:25.702893    4633 logs.go:123] Gathering logs for kube-scheduler [453a3041b38a] ...
	I0914 10:36:25.702900    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 453a3041b38a"
	I0914 10:36:25.717982    4633 logs.go:123] Gathering logs for kube-proxy [f73d335e1ea1] ...
	I0914 10:36:25.717992    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f73d335e1ea1"
	I0914 10:36:25.729646    4633 logs.go:123] Gathering logs for kube-apiserver [04291acc9ea5] ...
	I0914 10:36:25.729658    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04291acc9ea5"
	I0914 10:36:25.747372    4633 logs.go:123] Gathering logs for coredns [426f46946fcd] ...
	I0914 10:36:25.747385    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 426f46946fcd"
	I0914 10:36:25.759078    4633 logs.go:123] Gathering logs for kube-controller-manager [0f8efd6fef5c] ...
	I0914 10:36:25.759095    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8efd6fef5c"
	I0914 10:36:28.279024    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:36:33.281123    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:36:33.281364    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:36:33.300840    4633 logs.go:276] 1 containers: [04291acc9ea5]
	I0914 10:36:33.300955    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:36:33.318694    4633 logs.go:276] 1 containers: [849350e1760a]
	I0914 10:36:33.318786    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:36:33.330915    4633 logs.go:276] 4 containers: [426f46946fcd 40433f7e0d05 bb0d72a796ab a39016b44acb]
	I0914 10:36:33.331001    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:36:33.341622    4633 logs.go:276] 1 containers: [453a3041b38a]
	I0914 10:36:33.341706    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:36:33.352468    4633 logs.go:276] 1 containers: [f73d335e1ea1]
	I0914 10:36:33.352553    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:36:33.363203    4633 logs.go:276] 1 containers: [0f8efd6fef5c]
	I0914 10:36:33.363276    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:36:33.373425    4633 logs.go:276] 0 containers: []
	W0914 10:36:33.373440    4633 logs.go:278] No container was found matching "kindnet"
	I0914 10:36:33.373513    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:36:33.388373    4633 logs.go:276] 1 containers: [7fc5fd563cda]
	I0914 10:36:33.388392    4633 logs.go:123] Gathering logs for etcd [849350e1760a] ...
	I0914 10:36:33.388398    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849350e1760a"
	I0914 10:36:33.402311    4633 logs.go:123] Gathering logs for coredns [bb0d72a796ab] ...
	I0914 10:36:33.402326    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0d72a796ab"
	I0914 10:36:33.414117    4633 logs.go:123] Gathering logs for kube-proxy [f73d335e1ea1] ...
	I0914 10:36:33.414128    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f73d335e1ea1"
	I0914 10:36:33.426340    4633 logs.go:123] Gathering logs for container status ...
	I0914 10:36:33.426350    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:36:33.437779    4633 logs.go:123] Gathering logs for kubelet ...
	I0914 10:36:33.437792    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:36:33.473118    4633 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:36:33.473128    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:36:33.507906    4633 logs.go:123] Gathering logs for coredns [426f46946fcd] ...
	I0914 10:36:33.507922    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 426f46946fcd"
	I0914 10:36:33.519242    4633 logs.go:123] Gathering logs for coredns [a39016b44acb] ...
	I0914 10:36:33.519253    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39016b44acb"
	I0914 10:36:33.531962    4633 logs.go:123] Gathering logs for kube-scheduler [453a3041b38a] ...
	I0914 10:36:33.531974    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 453a3041b38a"
	I0914 10:36:33.551249    4633 logs.go:123] Gathering logs for dmesg ...
	I0914 10:36:33.551259    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:36:33.555865    4633 logs.go:123] Gathering logs for kube-apiserver [04291acc9ea5] ...
	I0914 10:36:33.555872    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04291acc9ea5"
	I0914 10:36:33.570315    4633 logs.go:123] Gathering logs for coredns [40433f7e0d05] ...
	I0914 10:36:33.570328    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40433f7e0d05"
	I0914 10:36:33.582438    4633 logs.go:123] Gathering logs for kube-controller-manager [0f8efd6fef5c] ...
	I0914 10:36:33.582448    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8efd6fef5c"
	I0914 10:36:33.599870    4633 logs.go:123] Gathering logs for storage-provisioner [7fc5fd563cda] ...
	I0914 10:36:33.599879    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc5fd563cda"
	I0914 10:36:33.611593    4633 logs.go:123] Gathering logs for Docker ...
	I0914 10:36:33.611605    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:36:36.136839    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:36:41.139376    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:36:41.139901    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:36:41.183925    4633 logs.go:276] 1 containers: [04291acc9ea5]
	I0914 10:36:41.184086    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:36:41.203443    4633 logs.go:276] 1 containers: [849350e1760a]
	I0914 10:36:41.203552    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:36:41.217879    4633 logs.go:276] 4 containers: [426f46946fcd 40433f7e0d05 bb0d72a796ab a39016b44acb]
	I0914 10:36:41.217972    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:36:41.229943    4633 logs.go:276] 1 containers: [453a3041b38a]
	I0914 10:36:41.230033    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:36:41.242029    4633 logs.go:276] 1 containers: [f73d335e1ea1]
	I0914 10:36:41.242112    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:36:41.256542    4633 logs.go:276] 1 containers: [0f8efd6fef5c]
	I0914 10:36:41.256624    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:36:41.267652    4633 logs.go:276] 0 containers: []
	W0914 10:36:41.267664    4633 logs.go:278] No container was found matching "kindnet"
	I0914 10:36:41.267734    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:36:41.277596    4633 logs.go:276] 1 containers: [7fc5fd563cda]
	I0914 10:36:41.277612    4633 logs.go:123] Gathering logs for kube-scheduler [453a3041b38a] ...
	I0914 10:36:41.277617    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 453a3041b38a"
	I0914 10:36:41.292094    4633 logs.go:123] Gathering logs for storage-provisioner [7fc5fd563cda] ...
	I0914 10:36:41.292108    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc5fd563cda"
	I0914 10:36:41.303894    4633 logs.go:123] Gathering logs for coredns [bb0d72a796ab] ...
	I0914 10:36:41.303903    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0d72a796ab"
	I0914 10:36:41.315365    4633 logs.go:123] Gathering logs for coredns [a39016b44acb] ...
	I0914 10:36:41.315378    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39016b44acb"
	I0914 10:36:41.326851    4633 logs.go:123] Gathering logs for kube-proxy [f73d335e1ea1] ...
	I0914 10:36:41.326861    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f73d335e1ea1"
	I0914 10:36:41.338227    4633 logs.go:123] Gathering logs for container status ...
	I0914 10:36:41.338240    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:36:41.351001    4633 logs.go:123] Gathering logs for kube-apiserver [04291acc9ea5] ...
	I0914 10:36:41.351011    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04291acc9ea5"
	I0914 10:36:41.365449    4633 logs.go:123] Gathering logs for coredns [40433f7e0d05] ...
	I0914 10:36:41.365461    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40433f7e0d05"
	I0914 10:36:41.378638    4633 logs.go:123] Gathering logs for Docker ...
	I0914 10:36:41.378651    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:36:41.404153    4633 logs.go:123] Gathering logs for dmesg ...
	I0914 10:36:41.404163    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:36:41.408769    4633 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:36:41.408776    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:36:41.461188    4633 logs.go:123] Gathering logs for etcd [849350e1760a] ...
	I0914 10:36:41.461199    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849350e1760a"
	I0914 10:36:41.475669    4633 logs.go:123] Gathering logs for coredns [426f46946fcd] ...
	I0914 10:36:41.475679    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 426f46946fcd"
	I0914 10:36:41.487206    4633 logs.go:123] Gathering logs for kube-controller-manager [0f8efd6fef5c] ...
	I0914 10:36:41.487216    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8efd6fef5c"
	I0914 10:36:41.509345    4633 logs.go:123] Gathering logs for kubelet ...
	I0914 10:36:41.509355    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:36:44.045123    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:36:49.047631    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:36:49.047953    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:36:49.073480    4633 logs.go:276] 1 containers: [04291acc9ea5]
	I0914 10:36:49.073608    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:36:49.091323    4633 logs.go:276] 1 containers: [849350e1760a]
	I0914 10:36:49.091427    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:36:49.104363    4633 logs.go:276] 4 containers: [426f46946fcd 40433f7e0d05 bb0d72a796ab a39016b44acb]
	I0914 10:36:49.104444    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:36:49.115013    4633 logs.go:276] 1 containers: [453a3041b38a]
	I0914 10:36:49.115102    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:36:49.125688    4633 logs.go:276] 1 containers: [f73d335e1ea1]
	I0914 10:36:49.125761    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:36:49.136656    4633 logs.go:276] 1 containers: [0f8efd6fef5c]
	I0914 10:36:49.136739    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:36:49.147009    4633 logs.go:276] 0 containers: []
	W0914 10:36:49.147024    4633 logs.go:278] No container was found matching "kindnet"
	I0914 10:36:49.147106    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:36:49.157509    4633 logs.go:276] 1 containers: [7fc5fd563cda]
	I0914 10:36:49.157527    4633 logs.go:123] Gathering logs for kubelet ...
	I0914 10:36:49.157533    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:36:49.193060    4633 logs.go:123] Gathering logs for etcd [849350e1760a] ...
	I0914 10:36:49.193071    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849350e1760a"
	I0914 10:36:49.207656    4633 logs.go:123] Gathering logs for coredns [bb0d72a796ab] ...
	I0914 10:36:49.207670    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0d72a796ab"
	I0914 10:36:49.218995    4633 logs.go:123] Gathering logs for coredns [a39016b44acb] ...
	I0914 10:36:49.219005    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39016b44acb"
	I0914 10:36:49.230835    4633 logs.go:123] Gathering logs for kube-proxy [f73d335e1ea1] ...
	I0914 10:36:49.230847    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f73d335e1ea1"
	I0914 10:36:49.242482    4633 logs.go:123] Gathering logs for kube-apiserver [04291acc9ea5] ...
	I0914 10:36:49.242492    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04291acc9ea5"
	I0914 10:36:49.259148    4633 logs.go:123] Gathering logs for coredns [426f46946fcd] ...
	I0914 10:36:49.259158    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 426f46946fcd"
	I0914 10:36:49.270551    4633 logs.go:123] Gathering logs for container status ...
	I0914 10:36:49.270561    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:36:49.282352    4633 logs.go:123] Gathering logs for dmesg ...
	I0914 10:36:49.282362    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:36:49.287016    4633 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:36:49.287022    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:36:49.320500    4633 logs.go:123] Gathering logs for coredns [40433f7e0d05] ...
	I0914 10:36:49.320514    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40433f7e0d05"
	I0914 10:36:49.332149    4633 logs.go:123] Gathering logs for kube-controller-manager [0f8efd6fef5c] ...
	I0914 10:36:49.332165    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8efd6fef5c"
	I0914 10:36:49.349278    4633 logs.go:123] Gathering logs for kube-scheduler [453a3041b38a] ...
	I0914 10:36:49.349288    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 453a3041b38a"
	I0914 10:36:49.364183    4633 logs.go:123] Gathering logs for storage-provisioner [7fc5fd563cda] ...
	I0914 10:36:49.364192    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc5fd563cda"
	I0914 10:36:49.376147    4633 logs.go:123] Gathering logs for Docker ...
	I0914 10:36:49.376157    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:36:51.901495    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:36:56.901796    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:36:56.902005    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:36:56.916821    4633 logs.go:276] 1 containers: [04291acc9ea5]
	I0914 10:36:56.916926    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:36:56.928819    4633 logs.go:276] 1 containers: [849350e1760a]
	I0914 10:36:56.928910    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:36:56.939542    4633 logs.go:276] 4 containers: [426f46946fcd 40433f7e0d05 bb0d72a796ab a39016b44acb]
	I0914 10:36:56.939623    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:36:56.949503    4633 logs.go:276] 1 containers: [453a3041b38a]
	I0914 10:36:56.949577    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:36:56.963179    4633 logs.go:276] 1 containers: [f73d335e1ea1]
	I0914 10:36:56.963252    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:36:56.974139    4633 logs.go:276] 1 containers: [0f8efd6fef5c]
	I0914 10:36:56.974214    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:36:56.984350    4633 logs.go:276] 0 containers: []
	W0914 10:36:56.984361    4633 logs.go:278] No container was found matching "kindnet"
	I0914 10:36:56.984440    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:36:56.995323    4633 logs.go:276] 1 containers: [7fc5fd563cda]
	I0914 10:36:56.995340    4633 logs.go:123] Gathering logs for dmesg ...
	I0914 10:36:56.995345    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:36:57.000183    4633 logs.go:123] Gathering logs for coredns [426f46946fcd] ...
	I0914 10:36:57.000190    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 426f46946fcd"
	I0914 10:36:57.012955    4633 logs.go:123] Gathering logs for kube-scheduler [453a3041b38a] ...
	I0914 10:36:57.012965    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 453a3041b38a"
	I0914 10:36:57.028121    4633 logs.go:123] Gathering logs for kube-proxy [f73d335e1ea1] ...
	I0914 10:36:57.028131    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f73d335e1ea1"
	I0914 10:36:57.039995    4633 logs.go:123] Gathering logs for container status ...
	I0914 10:36:57.040006    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:36:57.051753    4633 logs.go:123] Gathering logs for kube-controller-manager [0f8efd6fef5c] ...
	I0914 10:36:57.051763    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8efd6fef5c"
	I0914 10:36:57.069067    4633 logs.go:123] Gathering logs for storage-provisioner [7fc5fd563cda] ...
	I0914 10:36:57.069077    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc5fd563cda"
	I0914 10:36:57.080515    4633 logs.go:123] Gathering logs for kubelet ...
	I0914 10:36:57.080526    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:36:57.115267    4633 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:36:57.115276    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:36:57.149795    4633 logs.go:123] Gathering logs for etcd [849350e1760a] ...
	I0914 10:36:57.149805    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849350e1760a"
	I0914 10:36:57.164105    4633 logs.go:123] Gathering logs for coredns [40433f7e0d05] ...
	I0914 10:36:57.164114    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40433f7e0d05"
	I0914 10:36:57.175582    4633 logs.go:123] Gathering logs for coredns [bb0d72a796ab] ...
	I0914 10:36:57.175593    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0d72a796ab"
	I0914 10:36:57.187414    4633 logs.go:123] Gathering logs for coredns [a39016b44acb] ...
	I0914 10:36:57.187425    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39016b44acb"
	I0914 10:36:57.199276    4633 logs.go:123] Gathering logs for kube-apiserver [04291acc9ea5] ...
	I0914 10:36:57.199288    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04291acc9ea5"
	I0914 10:36:57.213909    4633 logs.go:123] Gathering logs for Docker ...
	I0914 10:36:57.213923    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:36:59.740549    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:37:04.742622    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:37:04.742742    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:37:04.763576    4633 logs.go:276] 1 containers: [04291acc9ea5]
	I0914 10:37:04.763666    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:37:04.779044    4633 logs.go:276] 1 containers: [849350e1760a]
	I0914 10:37:04.779132    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:37:04.790489    4633 logs.go:276] 4 containers: [426f46946fcd 40433f7e0d05 bb0d72a796ab a39016b44acb]
	I0914 10:37:04.790571    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:37:04.802755    4633 logs.go:276] 1 containers: [453a3041b38a]
	I0914 10:37:04.802829    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:37:04.815331    4633 logs.go:276] 1 containers: [f73d335e1ea1]
	I0914 10:37:04.815413    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:37:04.826758    4633 logs.go:276] 1 containers: [0f8efd6fef5c]
	I0914 10:37:04.826850    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:37:04.837386    4633 logs.go:276] 0 containers: []
	W0914 10:37:04.837398    4633 logs.go:278] No container was found matching "kindnet"
	I0914 10:37:04.837471    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:37:04.848244    4633 logs.go:276] 1 containers: [7fc5fd563cda]
	I0914 10:37:04.848261    4633 logs.go:123] Gathering logs for container status ...
	I0914 10:37:04.848267    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:37:04.860229    4633 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:37:04.860240    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:37:04.898953    4633 logs.go:123] Gathering logs for coredns [a39016b44acb] ...
	I0914 10:37:04.898964    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39016b44acb"
	I0914 10:37:04.910675    4633 logs.go:123] Gathering logs for kube-scheduler [453a3041b38a] ...
	I0914 10:37:04.910686    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 453a3041b38a"
	I0914 10:37:04.930715    4633 logs.go:123] Gathering logs for kube-controller-manager [0f8efd6fef5c] ...
	I0914 10:37:04.930724    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8efd6fef5c"
	I0914 10:37:04.948977    4633 logs.go:123] Gathering logs for storage-provisioner [7fc5fd563cda] ...
	I0914 10:37:04.948987    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc5fd563cda"
	I0914 10:37:04.961013    4633 logs.go:123] Gathering logs for kubelet ...
	I0914 10:37:04.961026    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:37:04.995686    4633 logs.go:123] Gathering logs for coredns [40433f7e0d05] ...
	I0914 10:37:04.995698    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40433f7e0d05"
	I0914 10:37:05.007945    4633 logs.go:123] Gathering logs for dmesg ...
	I0914 10:37:05.007958    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:37:05.013187    4633 logs.go:123] Gathering logs for etcd [849350e1760a] ...
	I0914 10:37:05.013195    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849350e1760a"
	I0914 10:37:05.027513    4633 logs.go:123] Gathering logs for kube-proxy [f73d335e1ea1] ...
	I0914 10:37:05.027523    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f73d335e1ea1"
	I0914 10:37:05.039464    4633 logs.go:123] Gathering logs for kube-apiserver [04291acc9ea5] ...
	I0914 10:37:05.039473    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04291acc9ea5"
	I0914 10:37:05.057810    4633 logs.go:123] Gathering logs for coredns [426f46946fcd] ...
	I0914 10:37:05.057824    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 426f46946fcd"
	I0914 10:37:05.069275    4633 logs.go:123] Gathering logs for coredns [bb0d72a796ab] ...
	I0914 10:37:05.069285    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0d72a796ab"
	I0914 10:37:05.080566    4633 logs.go:123] Gathering logs for Docker ...
	I0914 10:37:05.080576    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:37:07.607824    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:37:12.609955    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:37:12.610254    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:37:12.632940    4633 logs.go:276] 1 containers: [04291acc9ea5]
	I0914 10:37:12.633080    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:37:12.648521    4633 logs.go:276] 1 containers: [849350e1760a]
	I0914 10:37:12.648606    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:37:12.662434    4633 logs.go:276] 4 containers: [426f46946fcd 40433f7e0d05 bb0d72a796ab a39016b44acb]
	I0914 10:37:12.662527    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:37:12.673466    4633 logs.go:276] 1 containers: [453a3041b38a]
	I0914 10:37:12.673546    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:37:12.683507    4633 logs.go:276] 1 containers: [f73d335e1ea1]
	I0914 10:37:12.683591    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:37:12.693676    4633 logs.go:276] 1 containers: [0f8efd6fef5c]
	I0914 10:37:12.693754    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:37:12.703940    4633 logs.go:276] 0 containers: []
	W0914 10:37:12.703951    4633 logs.go:278] No container was found matching "kindnet"
	I0914 10:37:12.704020    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:37:12.714319    4633 logs.go:276] 1 containers: [7fc5fd563cda]
	I0914 10:37:12.714337    4633 logs.go:123] Gathering logs for container status ...
	I0914 10:37:12.714343    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:37:12.727375    4633 logs.go:123] Gathering logs for dmesg ...
	I0914 10:37:12.727388    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:37:12.732176    4633 logs.go:123] Gathering logs for kube-apiserver [04291acc9ea5] ...
	I0914 10:37:12.732182    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04291acc9ea5"
	I0914 10:37:12.746450    4633 logs.go:123] Gathering logs for etcd [849350e1760a] ...
	I0914 10:37:12.746461    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849350e1760a"
	I0914 10:37:12.759943    4633 logs.go:123] Gathering logs for coredns [a39016b44acb] ...
	I0914 10:37:12.759955    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39016b44acb"
	I0914 10:37:12.772564    4633 logs.go:123] Gathering logs for storage-provisioner [7fc5fd563cda] ...
	I0914 10:37:12.772576    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc5fd563cda"
	I0914 10:37:12.784418    4633 logs.go:123] Gathering logs for coredns [426f46946fcd] ...
	I0914 10:37:12.784433    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 426f46946fcd"
	I0914 10:37:12.796050    4633 logs.go:123] Gathering logs for coredns [40433f7e0d05] ...
	I0914 10:37:12.796061    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40433f7e0d05"
	I0914 10:37:12.807583    4633 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:37:12.807595    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:37:12.843507    4633 logs.go:123] Gathering logs for kube-proxy [f73d335e1ea1] ...
	I0914 10:37:12.843522    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f73d335e1ea1"
	I0914 10:37:12.855727    4633 logs.go:123] Gathering logs for kubelet ...
	I0914 10:37:12.855740    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:37:12.891109    4633 logs.go:123] Gathering logs for coredns [bb0d72a796ab] ...
	I0914 10:37:12.891121    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0d72a796ab"
	I0914 10:37:12.907885    4633 logs.go:123] Gathering logs for kube-scheduler [453a3041b38a] ...
	I0914 10:37:12.907898    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 453a3041b38a"
	I0914 10:37:12.923135    4633 logs.go:123] Gathering logs for kube-controller-manager [0f8efd6fef5c] ...
	I0914 10:37:12.923148    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8efd6fef5c"
	I0914 10:37:12.940543    4633 logs.go:123] Gathering logs for Docker ...
	I0914 10:37:12.940555    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:37:15.467716    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:37:20.470192    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:37:20.470440    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:37:20.494836    4633 logs.go:276] 1 containers: [04291acc9ea5]
	I0914 10:37:20.494987    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:37:20.513583    4633 logs.go:276] 1 containers: [849350e1760a]
	I0914 10:37:20.513676    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:37:20.533917    4633 logs.go:276] 4 containers: [426f46946fcd 40433f7e0d05 bb0d72a796ab a39016b44acb]
	I0914 10:37:20.534012    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:37:20.544775    4633 logs.go:276] 1 containers: [453a3041b38a]
	I0914 10:37:20.544861    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:37:20.558173    4633 logs.go:276] 1 containers: [f73d335e1ea1]
	I0914 10:37:20.558255    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:37:20.569401    4633 logs.go:276] 1 containers: [0f8efd6fef5c]
	I0914 10:37:20.569480    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:37:20.579938    4633 logs.go:276] 0 containers: []
	W0914 10:37:20.579953    4633 logs.go:278] No container was found matching "kindnet"
	I0914 10:37:20.580017    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:37:20.590817    4633 logs.go:276] 1 containers: [7fc5fd563cda]
	I0914 10:37:20.590835    4633 logs.go:123] Gathering logs for coredns [a39016b44acb] ...
	I0914 10:37:20.590840    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39016b44acb"
	I0914 10:37:20.604245    4633 logs.go:123] Gathering logs for dmesg ...
	I0914 10:37:20.604259    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:37:20.609130    4633 logs.go:123] Gathering logs for etcd [849350e1760a] ...
	I0914 10:37:20.609137    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849350e1760a"
	I0914 10:37:20.622769    4633 logs.go:123] Gathering logs for kube-controller-manager [0f8efd6fef5c] ...
	I0914 10:37:20.622778    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8efd6fef5c"
	I0914 10:37:20.640342    4633 logs.go:123] Gathering logs for kubelet ...
	I0914 10:37:20.640353    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:37:20.676427    4633 logs.go:123] Gathering logs for kube-scheduler [453a3041b38a] ...
	I0914 10:37:20.676437    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 453a3041b38a"
	I0914 10:37:20.691393    4633 logs.go:123] Gathering logs for Docker ...
	I0914 10:37:20.691405    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:37:20.716890    4633 logs.go:123] Gathering logs for container status ...
	I0914 10:37:20.716898    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:37:20.728777    4633 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:37:20.728788    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:37:20.765987    4633 logs.go:123] Gathering logs for coredns [426f46946fcd] ...
	I0914 10:37:20.766000    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 426f46946fcd"
	I0914 10:37:20.778351    4633 logs.go:123] Gathering logs for coredns [40433f7e0d05] ...
	I0914 10:37:20.778361    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40433f7e0d05"
	I0914 10:37:20.794352    4633 logs.go:123] Gathering logs for coredns [bb0d72a796ab] ...
	I0914 10:37:20.794363    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0d72a796ab"
	I0914 10:37:20.812278    4633 logs.go:123] Gathering logs for kube-proxy [f73d335e1ea1] ...
	I0914 10:37:20.812288    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f73d335e1ea1"
	I0914 10:37:20.824339    4633 logs.go:123] Gathering logs for storage-provisioner [7fc5fd563cda] ...
	I0914 10:37:20.824350    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc5fd563cda"
	I0914 10:37:20.836034    4633 logs.go:123] Gathering logs for kube-apiserver [04291acc9ea5] ...
	I0914 10:37:20.836046    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04291acc9ea5"
	I0914 10:37:23.352206    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:37:28.354322    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:37:28.354801    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:37:28.386430    4633 logs.go:276] 1 containers: [04291acc9ea5]
	I0914 10:37:28.386584    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:37:28.406049    4633 logs.go:276] 1 containers: [849350e1760a]
	I0914 10:37:28.406173    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:37:28.420700    4633 logs.go:276] 4 containers: [426f46946fcd 40433f7e0d05 bb0d72a796ab a39016b44acb]
	I0914 10:37:28.420792    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:37:28.432603    4633 logs.go:276] 1 containers: [453a3041b38a]
	I0914 10:37:28.432680    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:37:28.443377    4633 logs.go:276] 1 containers: [f73d335e1ea1]
	I0914 10:37:28.443467    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:37:28.460733    4633 logs.go:276] 1 containers: [0f8efd6fef5c]
	I0914 10:37:28.460817    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:37:28.471417    4633 logs.go:276] 0 containers: []
	W0914 10:37:28.471432    4633 logs.go:278] No container was found matching "kindnet"
	I0914 10:37:28.471508    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:37:28.483643    4633 logs.go:276] 1 containers: [7fc5fd563cda]
	I0914 10:37:28.483661    4633 logs.go:123] Gathering logs for storage-provisioner [7fc5fd563cda] ...
	I0914 10:37:28.483668    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc5fd563cda"
	I0914 10:37:28.495493    4633 logs.go:123] Gathering logs for etcd [849350e1760a] ...
	I0914 10:37:28.495504    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849350e1760a"
	I0914 10:37:28.512018    4633 logs.go:123] Gathering logs for kube-proxy [f73d335e1ea1] ...
	I0914 10:37:28.512027    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f73d335e1ea1"
	I0914 10:37:28.524135    4633 logs.go:123] Gathering logs for container status ...
	I0914 10:37:28.524144    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:37:28.536958    4633 logs.go:123] Gathering logs for kube-apiserver [04291acc9ea5] ...
	I0914 10:37:28.536969    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04291acc9ea5"
	I0914 10:37:28.551385    4633 logs.go:123] Gathering logs for coredns [a39016b44acb] ...
	I0914 10:37:28.551395    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39016b44acb"
	I0914 10:37:28.563452    4633 logs.go:123] Gathering logs for kube-scheduler [453a3041b38a] ...
	I0914 10:37:28.563462    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 453a3041b38a"
	I0914 10:37:28.582648    4633 logs.go:123] Gathering logs for coredns [40433f7e0d05] ...
	I0914 10:37:28.582658    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40433f7e0d05"
	I0914 10:37:28.597972    4633 logs.go:123] Gathering logs for dmesg ...
	I0914 10:37:28.597982    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:37:28.602401    4633 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:37:28.602411    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:37:28.636954    4633 logs.go:123] Gathering logs for coredns [426f46946fcd] ...
	I0914 10:37:28.636966    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 426f46946fcd"
	I0914 10:37:28.648994    4633 logs.go:123] Gathering logs for coredns [bb0d72a796ab] ...
	I0914 10:37:28.649006    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0d72a796ab"
	I0914 10:37:28.665356    4633 logs.go:123] Gathering logs for kube-controller-manager [0f8efd6fef5c] ...
	I0914 10:37:28.665366    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8efd6fef5c"
	I0914 10:37:28.683749    4633 logs.go:123] Gathering logs for Docker ...
	I0914 10:37:28.683758    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:37:28.710200    4633 logs.go:123] Gathering logs for kubelet ...
	I0914 10:37:28.710213    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:37:31.247312    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:37:36.249490    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:37:36.249619    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:37:36.260686    4633 logs.go:276] 1 containers: [04291acc9ea5]
	I0914 10:37:36.260768    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:37:36.271670    4633 logs.go:276] 1 containers: [849350e1760a]
	I0914 10:37:36.271764    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:37:36.282929    4633 logs.go:276] 4 containers: [426f46946fcd 40433f7e0d05 bb0d72a796ab a39016b44acb]
	I0914 10:37:36.283009    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:37:36.293810    4633 logs.go:276] 1 containers: [453a3041b38a]
	I0914 10:37:36.293895    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:37:36.304473    4633 logs.go:276] 1 containers: [f73d335e1ea1]
	I0914 10:37:36.304550    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:37:36.316237    4633 logs.go:276] 1 containers: [0f8efd6fef5c]
	I0914 10:37:36.316327    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:37:36.328195    4633 logs.go:276] 0 containers: []
	W0914 10:37:36.328208    4633 logs.go:278] No container was found matching "kindnet"
	I0914 10:37:36.328285    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:37:36.339640    4633 logs.go:276] 1 containers: [7fc5fd563cda]
	I0914 10:37:36.339661    4633 logs.go:123] Gathering logs for kubelet ...
	I0914 10:37:36.339667    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:37:36.376574    4633 logs.go:123] Gathering logs for dmesg ...
	I0914 10:37:36.376593    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:37:36.381386    4633 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:37:36.381392    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:37:36.416057    4633 logs.go:123] Gathering logs for etcd [849350e1760a] ...
	I0914 10:37:36.416067    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849350e1760a"
	I0914 10:37:36.431024    4633 logs.go:123] Gathering logs for coredns [426f46946fcd] ...
	I0914 10:37:36.431043    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 426f46946fcd"
	I0914 10:37:36.444473    4633 logs.go:123] Gathering logs for coredns [bb0d72a796ab] ...
	I0914 10:37:36.444484    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0d72a796ab"
	I0914 10:37:36.457014    4633 logs.go:123] Gathering logs for kube-scheduler [453a3041b38a] ...
	I0914 10:37:36.457025    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 453a3041b38a"
	I0914 10:37:36.472426    4633 logs.go:123] Gathering logs for kube-controller-manager [0f8efd6fef5c] ...
	I0914 10:37:36.472439    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8efd6fef5c"
	I0914 10:37:36.490566    4633 logs.go:123] Gathering logs for container status ...
	I0914 10:37:36.490582    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:37:36.504952    4633 logs.go:123] Gathering logs for kube-apiserver [04291acc9ea5] ...
	I0914 10:37:36.504963    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04291acc9ea5"
	I0914 10:37:36.519202    4633 logs.go:123] Gathering logs for coredns [40433f7e0d05] ...
	I0914 10:37:36.519213    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40433f7e0d05"
	I0914 10:37:36.530882    4633 logs.go:123] Gathering logs for kube-proxy [f73d335e1ea1] ...
	I0914 10:37:36.530898    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f73d335e1ea1"
	I0914 10:37:36.542593    4633 logs.go:123] Gathering logs for Docker ...
	I0914 10:37:36.542603    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:37:36.565828    4633 logs.go:123] Gathering logs for coredns [a39016b44acb] ...
	I0914 10:37:36.565835    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39016b44acb"
	I0914 10:37:36.577765    4633 logs.go:123] Gathering logs for storage-provisioner [7fc5fd563cda] ...
	I0914 10:37:36.577775    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc5fd563cda"
	I0914 10:37:39.091286    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:37:44.093386    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:37:44.093818    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:37:44.128062    4633 logs.go:276] 1 containers: [04291acc9ea5]
	I0914 10:37:44.128230    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:37:44.146583    4633 logs.go:276] 1 containers: [849350e1760a]
	I0914 10:37:44.146699    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:37:44.161476    4633 logs.go:276] 4 containers: [426f46946fcd 40433f7e0d05 bb0d72a796ab a39016b44acb]
	I0914 10:37:44.161579    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:37:44.180949    4633 logs.go:276] 1 containers: [453a3041b38a]
	I0914 10:37:44.181036    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:37:44.191716    4633 logs.go:276] 1 containers: [f73d335e1ea1]
	I0914 10:37:44.191802    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:37:44.202618    4633 logs.go:276] 1 containers: [0f8efd6fef5c]
	I0914 10:37:44.202704    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:37:44.213246    4633 logs.go:276] 0 containers: []
	W0914 10:37:44.213259    4633 logs.go:278] No container was found matching "kindnet"
	I0914 10:37:44.213338    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:37:44.224033    4633 logs.go:276] 1 containers: [7fc5fd563cda]
	I0914 10:37:44.224056    4633 logs.go:123] Gathering logs for coredns [40433f7e0d05] ...
	I0914 10:37:44.224062    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40433f7e0d05"
	I0914 10:37:44.240199    4633 logs.go:123] Gathering logs for coredns [bb0d72a796ab] ...
	I0914 10:37:44.240210    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0d72a796ab"
	I0914 10:37:44.252259    4633 logs.go:123] Gathering logs for kube-scheduler [453a3041b38a] ...
	I0914 10:37:44.252272    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 453a3041b38a"
	I0914 10:37:44.267928    4633 logs.go:123] Gathering logs for storage-provisioner [7fc5fd563cda] ...
	I0914 10:37:44.267944    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc5fd563cda"
	I0914 10:37:44.280275    4633 logs.go:123] Gathering logs for etcd [849350e1760a] ...
	I0914 10:37:44.280286    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849350e1760a"
	I0914 10:37:44.294478    4633 logs.go:123] Gathering logs for kube-apiserver [04291acc9ea5] ...
	I0914 10:37:44.294487    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04291acc9ea5"
	I0914 10:37:44.308677    4633 logs.go:123] Gathering logs for kube-proxy [f73d335e1ea1] ...
	I0914 10:37:44.308687    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f73d335e1ea1"
	I0914 10:37:44.320535    4633 logs.go:123] Gathering logs for kubelet ...
	I0914 10:37:44.320551    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:37:44.354849    4633 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:37:44.354861    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:37:44.390520    4633 logs.go:123] Gathering logs for coredns [426f46946fcd] ...
	I0914 10:37:44.390534    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 426f46946fcd"
	I0914 10:37:44.402180    4633 logs.go:123] Gathering logs for coredns [a39016b44acb] ...
	I0914 10:37:44.402190    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39016b44acb"
	I0914 10:37:44.413436    4633 logs.go:123] Gathering logs for kube-controller-manager [0f8efd6fef5c] ...
	I0914 10:37:44.413450    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8efd6fef5c"
	I0914 10:37:44.432451    4633 logs.go:123] Gathering logs for Docker ...
	I0914 10:37:44.432460    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:37:44.455501    4633 logs.go:123] Gathering logs for container status ...
	I0914 10:37:44.455510    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:37:44.466989    4633 logs.go:123] Gathering logs for dmesg ...
	I0914 10:37:44.467004    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:37:46.973747    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:37:51.975962    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:37:51.976151    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:37:51.988271    4633 logs.go:276] 1 containers: [04291acc9ea5]
	I0914 10:37:51.988358    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:37:51.998445    4633 logs.go:276] 1 containers: [849350e1760a]
	I0914 10:37:51.998536    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:37:52.011761    4633 logs.go:276] 4 containers: [426f46946fcd 40433f7e0d05 bb0d72a796ab a39016b44acb]
	I0914 10:37:52.011846    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:37:52.022680    4633 logs.go:276] 1 containers: [453a3041b38a]
	I0914 10:37:52.022776    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:37:52.036226    4633 logs.go:276] 1 containers: [f73d335e1ea1]
	I0914 10:37:52.036307    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:37:52.050852    4633 logs.go:276] 1 containers: [0f8efd6fef5c]
	I0914 10:37:52.050941    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:37:52.061651    4633 logs.go:276] 0 containers: []
	W0914 10:37:52.061665    4633 logs.go:278] No container was found matching "kindnet"
	I0914 10:37:52.061742    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:37:52.073550    4633 logs.go:276] 1 containers: [7fc5fd563cda]
	I0914 10:37:52.073573    4633 logs.go:123] Gathering logs for dmesg ...
	I0914 10:37:52.073578    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:37:52.078525    4633 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:37:52.078532    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:37:52.116465    4633 logs.go:123] Gathering logs for kube-controller-manager [0f8efd6fef5c] ...
	I0914 10:37:52.116476    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8efd6fef5c"
	I0914 10:37:52.134895    4633 logs.go:123] Gathering logs for container status ...
	I0914 10:37:52.134915    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:37:52.147432    4633 logs.go:123] Gathering logs for kubelet ...
	I0914 10:37:52.147445    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:37:52.183158    4633 logs.go:123] Gathering logs for etcd [849350e1760a] ...
	I0914 10:37:52.183175    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849350e1760a"
	I0914 10:37:52.201301    4633 logs.go:123] Gathering logs for coredns [426f46946fcd] ...
	I0914 10:37:52.201316    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 426f46946fcd"
	I0914 10:37:52.212905    4633 logs.go:123] Gathering logs for coredns [a39016b44acb] ...
	I0914 10:37:52.212915    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39016b44acb"
	I0914 10:37:52.232153    4633 logs.go:123] Gathering logs for kube-proxy [f73d335e1ea1] ...
	I0914 10:37:52.232164    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f73d335e1ea1"
	I0914 10:37:52.244295    4633 logs.go:123] Gathering logs for kube-apiserver [04291acc9ea5] ...
	I0914 10:37:52.244308    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04291acc9ea5"
	I0914 10:37:52.259865    4633 logs.go:123] Gathering logs for storage-provisioner [7fc5fd563cda] ...
	I0914 10:37:52.259875    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc5fd563cda"
	I0914 10:37:52.276597    4633 logs.go:123] Gathering logs for Docker ...
	I0914 10:37:52.276608    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:37:52.299592    4633 logs.go:123] Gathering logs for coredns [40433f7e0d05] ...
	I0914 10:37:52.299603    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40433f7e0d05"
	I0914 10:37:52.311036    4633 logs.go:123] Gathering logs for coredns [bb0d72a796ab] ...
	I0914 10:37:52.311046    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0d72a796ab"
	I0914 10:37:52.323741    4633 logs.go:123] Gathering logs for kube-scheduler [453a3041b38a] ...
	I0914 10:37:52.323752    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 453a3041b38a"
	I0914 10:37:54.841147    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:37:59.843200    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:37:59.843307    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:37:59.854604    4633 logs.go:276] 1 containers: [04291acc9ea5]
	I0914 10:37:59.854702    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:37:59.865540    4633 logs.go:276] 1 containers: [849350e1760a]
	I0914 10:37:59.865627    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:37:59.876673    4633 logs.go:276] 4 containers: [426f46946fcd 40433f7e0d05 bb0d72a796ab a39016b44acb]
	I0914 10:37:59.876765    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:37:59.887605    4633 logs.go:276] 1 containers: [453a3041b38a]
	I0914 10:37:59.887686    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:37:59.898237    4633 logs.go:276] 1 containers: [f73d335e1ea1]
	I0914 10:37:59.898321    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:37:59.914122    4633 logs.go:276] 1 containers: [0f8efd6fef5c]
	I0914 10:37:59.914203    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:37:59.928592    4633 logs.go:276] 0 containers: []
	W0914 10:37:59.928604    4633 logs.go:278] No container was found matching "kindnet"
	I0914 10:37:59.928672    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:37:59.939243    4633 logs.go:276] 1 containers: [7fc5fd563cda]
	I0914 10:37:59.939260    4633 logs.go:123] Gathering logs for coredns [426f46946fcd] ...
	I0914 10:37:59.939265    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 426f46946fcd"
	I0914 10:37:59.954080    4633 logs.go:123] Gathering logs for coredns [40433f7e0d05] ...
	I0914 10:37:59.954090    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40433f7e0d05"
	I0914 10:37:59.965676    4633 logs.go:123] Gathering logs for coredns [bb0d72a796ab] ...
	I0914 10:37:59.965685    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0d72a796ab"
	I0914 10:37:59.977104    4633 logs.go:123] Gathering logs for kube-controller-manager [0f8efd6fef5c] ...
	I0914 10:37:59.977112    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8efd6fef5c"
	I0914 10:37:59.995451    4633 logs.go:123] Gathering logs for container status ...
	I0914 10:37:59.995461    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:38:00.007037    4633 logs.go:123] Gathering logs for kubelet ...
	I0914 10:38:00.007051    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:38:00.041124    4633 logs.go:123] Gathering logs for coredns [a39016b44acb] ...
	I0914 10:38:00.041133    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39016b44acb"
	I0914 10:38:00.052889    4633 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:38:00.052901    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:38:00.086779    4633 logs.go:123] Gathering logs for kube-apiserver [04291acc9ea5] ...
	I0914 10:38:00.086794    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04291acc9ea5"
	I0914 10:38:00.100954    4633 logs.go:123] Gathering logs for kube-proxy [f73d335e1ea1] ...
	I0914 10:38:00.100964    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f73d335e1ea1"
	I0914 10:38:00.111938    4633 logs.go:123] Gathering logs for Docker ...
	I0914 10:38:00.111947    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:38:00.136806    4633 logs.go:123] Gathering logs for dmesg ...
	I0914 10:38:00.136820    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:38:00.141292    4633 logs.go:123] Gathering logs for etcd [849350e1760a] ...
	I0914 10:38:00.141300    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849350e1760a"
	I0914 10:38:00.156877    4633 logs.go:123] Gathering logs for kube-scheduler [453a3041b38a] ...
	I0914 10:38:00.156888    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 453a3041b38a"
	I0914 10:38:00.172408    4633 logs.go:123] Gathering logs for storage-provisioner [7fc5fd563cda] ...
	I0914 10:38:00.172418    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc5fd563cda"
	I0914 10:38:02.686192    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:38:07.688258    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:38:07.688441    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:38:07.703947    4633 logs.go:276] 1 containers: [04291acc9ea5]
	I0914 10:38:07.704044    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:38:07.715343    4633 logs.go:276] 1 containers: [849350e1760a]
	I0914 10:38:07.715434    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:38:07.726505    4633 logs.go:276] 5 containers: [15fe6196b690 bfd281589cff 426f46946fcd 40433f7e0d05 a39016b44acb]
	I0914 10:38:07.726583    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:38:07.738994    4633 logs.go:276] 1 containers: [453a3041b38a]
	I0914 10:38:07.739092    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:38:07.749888    4633 logs.go:276] 1 containers: [f73d335e1ea1]
	I0914 10:38:07.749973    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:38:07.760052    4633 logs.go:276] 1 containers: [0f8efd6fef5c]
	I0914 10:38:07.760133    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:38:07.770375    4633 logs.go:276] 0 containers: []
	W0914 10:38:07.770387    4633 logs.go:278] No container was found matching "kindnet"
	I0914 10:38:07.770460    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:38:07.784525    4633 logs.go:276] 1 containers: [7fc5fd563cda]
	I0914 10:38:07.784542    4633 logs.go:123] Gathering logs for etcd [849350e1760a] ...
	I0914 10:38:07.784547    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849350e1760a"
	I0914 10:38:07.798354    4633 logs.go:123] Gathering logs for coredns [426f46946fcd] ...
	I0914 10:38:07.798364    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 426f46946fcd"
	I0914 10:38:07.810428    4633 logs.go:123] Gathering logs for kube-scheduler [453a3041b38a] ...
	I0914 10:38:07.810440    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 453a3041b38a"
	I0914 10:38:07.826317    4633 logs.go:123] Gathering logs for kube-proxy [f73d335e1ea1] ...
	I0914 10:38:07.826326    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f73d335e1ea1"
	I0914 10:38:07.837983    4633 logs.go:123] Gathering logs for container status ...
	I0914 10:38:07.837993    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:38:07.858265    4633 logs.go:123] Gathering logs for kubelet ...
	I0914 10:38:07.858279    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:38:07.892942    4633 logs.go:123] Gathering logs for coredns [15fe6196b690] ...
	I0914 10:38:07.892957    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15fe6196b690"
	I0914 10:38:07.904177    4633 logs.go:123] Gathering logs for coredns [bfd281589cff] ...
	I0914 10:38:07.904190    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfd281589cff"
	I0914 10:38:07.915326    4633 logs.go:123] Gathering logs for Docker ...
	I0914 10:38:07.915342    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:38:07.940298    4633 logs.go:123] Gathering logs for dmesg ...
	I0914 10:38:07.940308    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:38:07.944968    4633 logs.go:123] Gathering logs for kube-apiserver [04291acc9ea5] ...
	I0914 10:38:07.944975    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04291acc9ea5"
	I0914 10:38:07.960108    4633 logs.go:123] Gathering logs for coredns [40433f7e0d05] ...
	I0914 10:38:07.960121    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40433f7e0d05"
	I0914 10:38:07.971858    4633 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:38:07.971872    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:38:08.009661    4633 logs.go:123] Gathering logs for kube-controller-manager [0f8efd6fef5c] ...
	I0914 10:38:08.009676    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8efd6fef5c"
	I0914 10:38:08.030953    4633 logs.go:123] Gathering logs for storage-provisioner [7fc5fd563cda] ...
	I0914 10:38:08.030963    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc5fd563cda"
	I0914 10:38:08.042668    4633 logs.go:123] Gathering logs for coredns [a39016b44acb] ...
	I0914 10:38:08.042677    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39016b44acb"
	W0914 10:38:08.053131    4633 logs.go:130] failed coredns [a39016b44acb]: command: /bin/bash -c "docker logs --tail 400 a39016b44acb" /bin/bash -c "docker logs --tail 400 a39016b44acb": Process exited with status 1
	stdout:
	
	stderr:
	Error: No such container: a39016b44acb
	 output: 
	** stderr ** 
	Error: No such container: a39016b44acb
	
	** /stderr **
	I0914 10:38:10.554360    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:38:15.555675    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:38:15.561447    4633 out.go:201] 
	W0914 10:38:15.565325    4633 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0914 10:38:15.565335    4633 out.go:270] * 
	W0914 10:38:15.566158    4633 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 10:38:15.577286    4633 out.go:201] 

** /stderr **
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-158000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:629: *** TestRunningBinaryUpgrade FAILED at 2024-09-14 10:38:15.690261 -0700 PDT m=+3322.010392792
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-158000 -n running-upgrade-158000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-158000 -n running-upgrade-158000: exit status 2 (15.652260375s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-158000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-203000          | force-systemd-flag-203000 | jenkins | v1.34.0 | 14 Sep 24 10:28 PDT |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-788000              | force-systemd-env-788000  | jenkins | v1.34.0 | 14 Sep 24 10:28 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-788000           | force-systemd-env-788000  | jenkins | v1.34.0 | 14 Sep 24 10:28 PDT | 14 Sep 24 10:28 PDT |
	| start   | -p docker-flags-413000                | docker-flags-413000       | jenkins | v1.34.0 | 14 Sep 24 10:28 PDT |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-203000             | force-systemd-flag-203000 | jenkins | v1.34.0 | 14 Sep 24 10:28 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-203000          | force-systemd-flag-203000 | jenkins | v1.34.0 | 14 Sep 24 10:28 PDT | 14 Sep 24 10:28 PDT |
	| start   | -p cert-expiration-528000             | cert-expiration-528000    | jenkins | v1.34.0 | 14 Sep 24 10:28 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-413000 ssh               | docker-flags-413000       | jenkins | v1.34.0 | 14 Sep 24 10:28 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-413000 ssh               | docker-flags-413000       | jenkins | v1.34.0 | 14 Sep 24 10:28 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-413000                | docker-flags-413000       | jenkins | v1.34.0 | 14 Sep 24 10:28 PDT | 14 Sep 24 10:28 PDT |
	| start   | -p cert-options-811000                | cert-options-811000       | jenkins | v1.34.0 | 14 Sep 24 10:28 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-811000 ssh               | cert-options-811000       | jenkins | v1.34.0 | 14 Sep 24 10:28 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-811000 -- sudo        | cert-options-811000       | jenkins | v1.34.0 | 14 Sep 24 10:28 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-811000                | cert-options-811000       | jenkins | v1.34.0 | 14 Sep 24 10:28 PDT | 14 Sep 24 10:28 PDT |
	| start   | -p running-upgrade-158000             | minikube                  | jenkins | v1.26.0 | 14 Sep 24 10:28 PDT | 14 Sep 24 10:29 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-158000             | running-upgrade-158000    | jenkins | v1.34.0 | 14 Sep 24 10:29 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-528000             | cert-expiration-528000    | jenkins | v1.34.0 | 14 Sep 24 10:31 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-528000             | cert-expiration-528000    | jenkins | v1.34.0 | 14 Sep 24 10:31 PDT | 14 Sep 24 10:31 PDT |
	| start   | -p kubernetes-upgrade-804000          | kubernetes-upgrade-804000 | jenkins | v1.34.0 | 14 Sep 24 10:31 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-804000          | kubernetes-upgrade-804000 | jenkins | v1.34.0 | 14 Sep 24 10:31 PDT | 14 Sep 24 10:32 PDT |
	| start   | -p kubernetes-upgrade-804000          | kubernetes-upgrade-804000 | jenkins | v1.34.0 | 14 Sep 24 10:32 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-804000          | kubernetes-upgrade-804000 | jenkins | v1.34.0 | 14 Sep 24 10:32 PDT | 14 Sep 24 10:32 PDT |
	| start   | -p stopped-upgrade-130000             | minikube                  | jenkins | v1.26.0 | 14 Sep 24 10:32 PDT | 14 Sep 24 10:32 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-130000 stop           | minikube                  | jenkins | v1.26.0 | 14 Sep 24 10:32 PDT | 14 Sep 24 10:33 PDT |
	| start   | -p stopped-upgrade-130000             | stopped-upgrade-130000    | jenkins | v1.34.0 | 14 Sep 24 10:33 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/14 10:33:00
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 10:33:00.943276    5189 out.go:345] Setting OutFile to fd 1 ...
	I0914 10:33:00.943414    5189 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:33:00.943418    5189 out.go:358] Setting ErrFile to fd 2...
	I0914 10:33:00.943421    5189 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:33:00.943539    5189 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19643-1079/.minikube/bin
	I0914 10:33:00.944552    5189 out.go:352] Setting JSON to false
	I0914 10:33:00.961802    5189 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3743,"bootTime":1726331437,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0914 10:33:00.961876    5189 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0914 10:33:00.966731    5189 out.go:177] * [stopped-upgrade-130000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0914 10:33:00.974881    5189 out.go:177]   - MINIKUBE_LOCATION=19643
	I0914 10:33:00.974949    5189 notify.go:220] Checking for updates...
	I0914 10:33:00.981815    5189 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	I0914 10:33:00.984830    5189 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 10:33:00.987901    5189 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 10:33:00.990882    5189 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	I0914 10:33:00.993839    5189 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 10:33:00.997125    5189 config.go:182] Loaded profile config "stopped-upgrade-130000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0914 10:33:01.000756    5189 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0914 10:33:01.003889    5189 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 10:33:01.007832    5189 out.go:177] * Using the qemu2 driver based on existing profile
	I0914 10:33:01.014798    5189 start.go:297] selected driver: qemu2
	I0914 10:33:01.014804    5189 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-130000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50518 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-130000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0914 10:33:01.014853    5189 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 10:33:01.017465    5189 cni.go:84] Creating CNI manager for ""
	I0914 10:33:01.017500    5189 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 10:33:01.017530    5189 start.go:340] cluster config:
	{Name:stopped-upgrade-130000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50518 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-130000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0914 10:33:01.017586    5189 iso.go:125] acquiring lock: {Name:mk9ce8cb266895455f3a8fa26b67755853c5c1f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 10:33:01.024844    5189 out.go:177] * Starting "stopped-upgrade-130000" primary control-plane node in "stopped-upgrade-130000" cluster
	I0914 10:33:01.028648    5189 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0914 10:33:01.028664    5189 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0914 10:33:01.028671    5189 cache.go:56] Caching tarball of preloaded images
	I0914 10:33:01.028722    5189 preload.go:172] Found /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 10:33:01.028727    5189 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0914 10:33:01.028788    5189 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/stopped-upgrade-130000/config.json ...
	I0914 10:33:01.029295    5189 start.go:360] acquireMachinesLock for stopped-upgrade-130000: {Name:mk3755b4300f547653a71a0686083053e04fcd63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 10:33:01.029327    5189 start.go:364] duration metric: took 26.584µs to acquireMachinesLock for "stopped-upgrade-130000"
	I0914 10:33:01.029335    5189 start.go:96] Skipping create...Using existing machine configuration
	I0914 10:33:01.029341    5189 fix.go:54] fixHost starting: 
	I0914 10:33:01.029449    5189 fix.go:112] recreateIfNeeded on stopped-upgrade-130000: state=Stopped err=<nil>
	W0914 10:33:01.029457    5189 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 10:33:01.037768    5189 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-130000" ...
	I0914 10:32:58.917158    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:33:01.041745    5189 qemu.go:418] Using hvf for hardware acceleration
	I0914 10:33:01.041818    5189 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/stopped-upgrade-130000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/stopped-upgrade-130000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/stopped-upgrade-130000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50483-:22,hostfwd=tcp::50484-:2376,hostname=stopped-upgrade-130000 -daemonize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/stopped-upgrade-130000/disk.qcow2
	I0914 10:33:01.089823    5189 main.go:141] libmachine: STDOUT: 
	I0914 10:33:01.089848    5189 main.go:141] libmachine: STDERR: 
	I0914 10:33:01.089854    5189 main.go:141] libmachine: Waiting for VM to start (ssh -p 50483 docker@127.0.0.1)...
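
The qemu-system-aarch64 invocation above is the entire VM restart: UEFI firmware loaded via pflash, hvf hardware acceleration, and user-mode networking whose hostfwd entries map host ports to guest SSH (50483 to 22) and the Docker TLS port (50484 to 2376), which is why the waiter then polls `ssh -p 50483 docker@127.0.0.1`. A trimmed sketch of the same command, with placeholder paths in place of the Jenkins workspace paths:

    # Sketch of the qemu2 driver's restart command; paths are placeholders,
    # the real flags appear verbatim in the libmachine line above.
    qemu-system-aarch64 \
      -M virt,highmem=off -cpu host -accel hvf \
      -m 2200 -smp 2 \
      -drive file=/path/to/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash \
      -display none -boot d -cdrom /path/to/boot2docker.iso \
      -qmp unix:/path/to/monitor,server,nowait \
      -pidfile /path/to/qemu.pid \
      -nic user,model=virtio,hostfwd=tcp::50483-:22,hostfwd=tcp::50484-:2376,hostname=stopped-upgrade-130000 \
      -daemonize /path/to/disk.qcow2
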
	I0914 10:33:03.919371    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:33:03.919630    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:33:03.957766    4633 logs.go:276] 2 containers: [cc9f670924c6 ae03d68bb317]
	I0914 10:33:03.957865    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:33:03.976670    4633 logs.go:276] 2 containers: [07d594fbccfe 8076dfae9f44]
	I0914 10:33:03.976748    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:33:03.988240    4633 logs.go:276] 1 containers: [4bea8c7649df]
	I0914 10:33:03.988327    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:33:03.998877    4633 logs.go:276] 2 containers: [1102ad44a942 2320ee4845a9]
	I0914 10:33:03.998959    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:33:04.009613    4633 logs.go:276] 1 containers: [e4616e40153f]
	I0914 10:33:04.009686    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:33:04.020211    4633 logs.go:276] 2 containers: [da12d5ff26bb 24ffba65710c]
	I0914 10:33:04.020279    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:33:04.029947    4633 logs.go:276] 0 containers: []
	W0914 10:33:04.029959    4633 logs.go:278] No container was found matching "kindnet"
	I0914 10:33:04.030031    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:33:04.048420    4633 logs.go:276] 2 containers: [80bfcc2ce813 4a27e6945f3c]
	I0914 10:33:04.048441    4633 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:33:04.048447    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:33:04.084841    4633 logs.go:123] Gathering logs for etcd [07d594fbccfe] ...
	I0914 10:33:04.084856    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07d594fbccfe"
	I0914 10:33:04.098830    4633 logs.go:123] Gathering logs for kube-scheduler [2320ee4845a9] ...
	I0914 10:33:04.098840    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2320ee4845a9"
	I0914 10:33:04.116827    4633 logs.go:123] Gathering logs for kube-proxy [e4616e40153f] ...
	I0914 10:33:04.116839    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4616e40153f"
	I0914 10:33:04.129060    4633 logs.go:123] Gathering logs for kube-controller-manager [da12d5ff26bb] ...
	I0914 10:33:04.129072    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da12d5ff26bb"
	I0914 10:33:04.146272    4633 logs.go:123] Gathering logs for kube-controller-manager [24ffba65710c] ...
	I0914 10:33:04.146282    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24ffba65710c"
	I0914 10:33:04.158023    4633 logs.go:123] Gathering logs for kubelet ...
	I0914 10:33:04.158037    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:33:04.197491    4633 logs.go:123] Gathering logs for dmesg ...
	I0914 10:33:04.197501    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:33:04.202377    4633 logs.go:123] Gathering logs for container status ...
	I0914 10:33:04.202385    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:33:04.213898    4633 logs.go:123] Gathering logs for coredns [4bea8c7649df] ...
	I0914 10:33:04.213911    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bea8c7649df"
	I0914 10:33:04.229149    4633 logs.go:123] Gathering logs for storage-provisioner [4a27e6945f3c] ...
	I0914 10:33:04.229158    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a27e6945f3c"
	I0914 10:33:04.240177    4633 logs.go:123] Gathering logs for kube-apiserver [cc9f670924c6] ...
	I0914 10:33:04.240187    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc9f670924c6"
	I0914 10:33:04.260718    4633 logs.go:123] Gathering logs for etcd [8076dfae9f44] ...
	I0914 10:33:04.260727    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8076dfae9f44"
	I0914 10:33:04.278069    4633 logs.go:123] Gathering logs for Docker ...
	I0914 10:33:04.278080    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:33:04.301182    4633 logs.go:123] Gathering logs for kube-apiserver [ae03d68bb317] ...
	I0914 10:33:04.301188    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae03d68bb317"
	I0914 10:33:04.324683    4633 logs.go:123] Gathering logs for storage-provisioner [80bfcc2ce813] ...
	I0914 10:33:04.324696    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80bfcc2ce813"
	I0914 10:33:04.336007    4633 logs.go:123] Gathering logs for kube-scheduler [1102ad44a942] ...
	I0914 10:33:04.336016    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1102ad44a942"
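
Each failed /healthz probe triggers the diagnostics pass seen above: enumerate containers per control-plane component via a docker name filter, then tail the last 400 lines of each. A minimal sketch of that gather loop, assuming shell access inside the guest; the component names and tail length match the ssh_runner lines:

    # Sketch of minikube's per-component log gathering; names and the
    # --tail 400 value match the log lines above.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet storage-provisioner; do
      ids=$(docker ps -a --filter=name="k8s_${name}" --format='{{.ID}}')
      if [ -z "$ids" ]; then
        echo "No container was found matching \"${name}\"" >&2
        continue
      fi
      for id in $ids; do
        docker logs --tail 400 "$id"
      done
    done
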
	I0914 10:33:06.849626    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:33:11.851767    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:33:11.851941    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:33:11.869583    4633 logs.go:276] 2 containers: [cc9f670924c6 ae03d68bb317]
	I0914 10:33:11.869669    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:33:11.882634    4633 logs.go:276] 2 containers: [07d594fbccfe 8076dfae9f44]
	I0914 10:33:11.882728    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:33:11.894920    4633 logs.go:276] 1 containers: [4bea8c7649df]
	I0914 10:33:11.895019    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:33:11.907331    4633 logs.go:276] 2 containers: [1102ad44a942 2320ee4845a9]
	I0914 10:33:11.907438    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:33:11.919937    4633 logs.go:276] 1 containers: [e4616e40153f]
	I0914 10:33:11.920039    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:33:11.932805    4633 logs.go:276] 2 containers: [da12d5ff26bb 24ffba65710c]
	I0914 10:33:11.932900    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:33:11.945935    4633 logs.go:276] 0 containers: []
	W0914 10:33:11.945950    4633 logs.go:278] No container was found matching "kindnet"
	I0914 10:33:11.946046    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:33:11.958308    4633 logs.go:276] 2 containers: [80bfcc2ce813 4a27e6945f3c]
	I0914 10:33:11.958328    4633 logs.go:123] Gathering logs for kube-controller-manager [24ffba65710c] ...
	I0914 10:33:11.958335    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24ffba65710c"
	I0914 10:33:11.980575    4633 logs.go:123] Gathering logs for kubelet ...
	I0914 10:33:11.980589    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:33:12.024698    4633 logs.go:123] Gathering logs for dmesg ...
	I0914 10:33:12.024719    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:33:12.029824    4633 logs.go:123] Gathering logs for coredns [4bea8c7649df] ...
	I0914 10:33:12.029836    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bea8c7649df"
	I0914 10:33:12.043580    4633 logs.go:123] Gathering logs for kube-scheduler [1102ad44a942] ...
	I0914 10:33:12.043591    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1102ad44a942"
	I0914 10:33:12.057226    4633 logs.go:123] Gathering logs for storage-provisioner [4a27e6945f3c] ...
	I0914 10:33:12.057236    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a27e6945f3c"
	I0914 10:33:12.075849    4633 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:33:12.075861    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:33:12.117745    4633 logs.go:123] Gathering logs for kube-apiserver [ae03d68bb317] ...
	I0914 10:33:12.117759    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae03d68bb317"
	I0914 10:33:12.143082    4633 logs.go:123] Gathering logs for etcd [07d594fbccfe] ...
	I0914 10:33:12.143103    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07d594fbccfe"
	I0914 10:33:12.159140    4633 logs.go:123] Gathering logs for kube-controller-manager [da12d5ff26bb] ...
	I0914 10:33:12.159154    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da12d5ff26bb"
	I0914 10:33:12.178544    4633 logs.go:123] Gathering logs for storage-provisioner [80bfcc2ce813] ...
	I0914 10:33:12.178567    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80bfcc2ce813"
	I0914 10:33:12.191869    4633 logs.go:123] Gathering logs for Docker ...
	I0914 10:33:12.191881    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:33:12.218636    4633 logs.go:123] Gathering logs for container status ...
	I0914 10:33:12.218654    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:33:12.232375    4633 logs.go:123] Gathering logs for kube-apiserver [cc9f670924c6] ...
	I0914 10:33:12.232393    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc9f670924c6"
	I0914 10:33:12.254531    4633 logs.go:123] Gathering logs for etcd [8076dfae9f44] ...
	I0914 10:33:12.254544    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8076dfae9f44"
	I0914 10:33:12.274745    4633 logs.go:123] Gathering logs for kube-scheduler [2320ee4845a9] ...
	I0914 10:33:12.274765    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2320ee4845a9"
	I0914 10:33:12.291698    4633 logs.go:123] Gathering logs for kube-proxy [e4616e40153f] ...
	I0914 10:33:12.291717    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4616e40153f"
	I0914 10:33:14.808187    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:33:19.810262    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:33:19.810646    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:33:19.838213    4633 logs.go:276] 2 containers: [cc9f670924c6 ae03d68bb317]
	I0914 10:33:19.838357    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:33:19.856051    4633 logs.go:276] 2 containers: [07d594fbccfe 8076dfae9f44]
	I0914 10:33:19.856159    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:33:19.869665    4633 logs.go:276] 1 containers: [4bea8c7649df]
	I0914 10:33:19.869759    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:33:19.882351    4633 logs.go:276] 2 containers: [1102ad44a942 2320ee4845a9]
	I0914 10:33:19.882438    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:33:19.892555    4633 logs.go:276] 1 containers: [e4616e40153f]
	I0914 10:33:19.892643    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:33:19.902900    4633 logs.go:276] 2 containers: [da12d5ff26bb 24ffba65710c]
	I0914 10:33:19.902982    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:33:19.916146    4633 logs.go:276] 0 containers: []
	W0914 10:33:19.916158    4633 logs.go:278] No container was found matching "kindnet"
	I0914 10:33:19.916228    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:33:19.926560    4633 logs.go:276] 2 containers: [80bfcc2ce813 4a27e6945f3c]
	I0914 10:33:19.926583    4633 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:33:19.926589    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:33:19.961073    4633 logs.go:123] Gathering logs for kube-apiserver [ae03d68bb317] ...
	I0914 10:33:19.961086    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae03d68bb317"
	I0914 10:33:19.981657    4633 logs.go:123] Gathering logs for coredns [4bea8c7649df] ...
	I0914 10:33:19.981669    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bea8c7649df"
	I0914 10:33:19.993327    4633 logs.go:123] Gathering logs for kube-controller-manager [da12d5ff26bb] ...
	I0914 10:33:19.993339    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da12d5ff26bb"
	I0914 10:33:20.010394    4633 logs.go:123] Gathering logs for kube-apiserver [cc9f670924c6] ...
	I0914 10:33:20.010404    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc9f670924c6"
	I0914 10:33:20.024334    4633 logs.go:123] Gathering logs for etcd [07d594fbccfe] ...
	I0914 10:33:20.024343    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07d594fbccfe"
	I0914 10:33:20.039330    4633 logs.go:123] Gathering logs for etcd [8076dfae9f44] ...
	I0914 10:33:20.039341    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8076dfae9f44"
	I0914 10:33:20.056868    4633 logs.go:123] Gathering logs for kube-controller-manager [24ffba65710c] ...
	I0914 10:33:20.056878    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24ffba65710c"
	I0914 10:33:20.071541    4633 logs.go:123] Gathering logs for storage-provisioner [4a27e6945f3c] ...
	I0914 10:33:20.071549    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a27e6945f3c"
	I0914 10:33:20.084298    4633 logs.go:123] Gathering logs for Docker ...
	I0914 10:33:20.084310    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:33:20.108355    4633 logs.go:123] Gathering logs for kubelet ...
	I0914 10:33:20.108369    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:33:20.150241    4633 logs.go:123] Gathering logs for dmesg ...
	I0914 10:33:20.150255    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:33:20.154780    4633 logs.go:123] Gathering logs for kube-scheduler [1102ad44a942] ...
	I0914 10:33:20.154788    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1102ad44a942"
	I0914 10:33:20.166968    4633 logs.go:123] Gathering logs for kube-proxy [e4616e40153f] ...
	I0914 10:33:20.166978    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4616e40153f"
	I0914 10:33:20.178827    4633 logs.go:123] Gathering logs for container status ...
	I0914 10:33:20.178840    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:33:20.192069    4633 logs.go:123] Gathering logs for kube-scheduler [2320ee4845a9] ...
	I0914 10:33:20.192079    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2320ee4845a9"
	I0914 10:33:20.207613    4633 logs.go:123] Gathering logs for storage-provisioner [80bfcc2ce813] ...
	I0914 10:33:20.207623    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80bfcc2ce813"
	I0914 10:33:21.130429    5189 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/stopped-upgrade-130000/config.json ...
	I0914 10:33:21.131019    5189 machine.go:93] provisionDockerMachine start ...
	I0914 10:33:21.131178    5189 main.go:141] libmachine: Using SSH client type: native
	I0914 10:33:21.131558    5189 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104591190] 0x1045939d0 <nil>  [] 0s} localhost 50483 <nil> <nil>}
	I0914 10:33:21.131571    5189 main.go:141] libmachine: About to run SSH command:
	hostname
	I0914 10:33:21.221193    5189 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0914 10:33:21.221222    5189 buildroot.go:166] provisioning hostname "stopped-upgrade-130000"
	I0914 10:33:21.221329    5189 main.go:141] libmachine: Using SSH client type: native
	I0914 10:33:21.221569    5189 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104591190] 0x1045939d0 <nil>  [] 0s} localhost 50483 <nil> <nil>}
	I0914 10:33:21.221582    5189 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-130000 && echo "stopped-upgrade-130000" | sudo tee /etc/hostname
	I0914 10:33:21.306522    5189 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-130000
	
	I0914 10:33:21.306603    5189 main.go:141] libmachine: Using SSH client type: native
	I0914 10:33:21.306762    5189 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104591190] 0x1045939d0 <nil>  [] 0s} localhost 50483 <nil> <nil>}
	I0914 10:33:21.306776    5189 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-130000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-130000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-130000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 10:33:21.377499    5189 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 10:33:21.377510    5189 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19643-1079/.minikube CaCertPath:/Users/jenkins/minikube-integration/19643-1079/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19643-1079/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19643-1079/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19643-1079/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19643-1079/.minikube}
	I0914 10:33:21.377526    5189 buildroot.go:174] setting up certificates
	I0914 10:33:21.377534    5189 provision.go:84] configureAuth start
	I0914 10:33:21.377541    5189 provision.go:143] copyHostCerts
	I0914 10:33:21.377612    5189 exec_runner.go:144] found /Users/jenkins/minikube-integration/19643-1079/.minikube/key.pem, removing ...
	I0914 10:33:21.377623    5189 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19643-1079/.minikube/key.pem
	I0914 10:33:21.377744    5189 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19643-1079/.minikube/key.pem (1675 bytes)
	I0914 10:33:21.377928    5189 exec_runner.go:144] found /Users/jenkins/minikube-integration/19643-1079/.minikube/ca.pem, removing ...
	I0914 10:33:21.377931    5189 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19643-1079/.minikube/ca.pem
	I0914 10:33:21.377989    5189 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19643-1079/.minikube/ca.pem (1078 bytes)
	I0914 10:33:21.378101    5189 exec_runner.go:144] found /Users/jenkins/minikube-integration/19643-1079/.minikube/cert.pem, removing ...
	I0914 10:33:21.378104    5189 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19643-1079/.minikube/cert.pem
	I0914 10:33:21.378157    5189 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19643-1079/.minikube/cert.pem (1123 bytes)
	I0914 10:33:21.378245    5189 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19643-1079/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19643-1079/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-130000 san=[127.0.0.1 localhost minikube stopped-upgrade-130000]
	I0914 10:33:21.439185    5189 provision.go:177] copyRemoteCerts
	I0914 10:33:21.439228    5189 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 10:33:21.439237    5189 sshutil.go:53] new ssh client: &{IP:localhost Port:50483 SSHKeyPath:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/stopped-upgrade-130000/id_rsa Username:docker}
	I0914 10:33:21.476387    5189 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 10:33:21.483123    5189 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0914 10:33:21.490003    5189 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0914 10:33:21.497596    5189 provision.go:87] duration metric: took 120.053458ms to configureAuth
	I0914 10:33:21.497610    5189 buildroot.go:189] setting minikube options for container-runtime
	I0914 10:33:21.497736    5189 config.go:182] Loaded profile config "stopped-upgrade-130000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0914 10:33:21.497772    5189 main.go:141] libmachine: Using SSH client type: native
	I0914 10:33:21.497861    5189 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104591190] 0x1045939d0 <nil>  [] 0s} localhost 50483 <nil> <nil>}
	I0914 10:33:21.497868    5189 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0914 10:33:21.564587    5189 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0914 10:33:21.564601    5189 buildroot.go:70] root file system type: tmpfs
	I0914 10:33:21.564651    5189 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0914 10:33:21.564719    5189 main.go:141] libmachine: Using SSH client type: native
	I0914 10:33:21.564833    5189 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104591190] 0x1045939d0 <nil>  [] 0s} localhost 50483 <nil> <nil>}
	I0914 10:33:21.564867    5189 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0914 10:33:21.635165    5189 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0914 10:33:21.635222    5189 main.go:141] libmachine: Using SSH client type: native
	I0914 10:33:21.635330    5189 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104591190] 0x1045939d0 <nil>  [] 0s} localhost 50483 <nil> <nil>}
	I0914 10:33:21.635340    5189 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0914 10:33:22.017309    5189 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
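
The `diff -u ... || { mv ...; systemctl ...; }` one-liner makes the unit update idempotent: if the rendered file matches the installed unit, nothing restarts; only on a difference (or, as here, when no unit exists yet) is the new file moved into place and Docker reloaded and restarted. The empty `ExecStart=` line inside the rendered unit is the usual systemd idiom for clearing an inherited start command before setting a new one. A condensed sketch of the pattern, where `render_unit` is a hypothetical stand-in for the long printf above:

    # Condensed render-diff-install pattern; render_unit is hypothetical.
    UNIT=/lib/systemd/system/docker.service
    NEW=${UNIT}.new
    render_unit | sudo tee "$NEW" >/dev/null
    if ! sudo diff -u "$UNIT" "$NEW"; then
      sudo mv "$NEW" "$UNIT"
      sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker
    fi
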
	I0914 10:33:22.017322    5189 machine.go:96] duration metric: took 886.330125ms to provisionDockerMachine
	I0914 10:33:22.017329    5189 start.go:293] postStartSetup for "stopped-upgrade-130000" (driver="qemu2")
	I0914 10:33:22.017336    5189 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 10:33:22.017408    5189 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 10:33:22.017417    5189 sshutil.go:53] new ssh client: &{IP:localhost Port:50483 SSHKeyPath:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/stopped-upgrade-130000/id_rsa Username:docker}
	I0914 10:33:22.056805    5189 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 10:33:22.058216    5189 info.go:137] Remote host: Buildroot 2021.02.12
	I0914 10:33:22.058223    5189 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19643-1079/.minikube/addons for local assets ...
	I0914 10:33:22.058518    5189 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19643-1079/.minikube/files for local assets ...
	I0914 10:33:22.058668    5189 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19643-1079/.minikube/files/etc/ssl/certs/16032.pem -> 16032.pem in /etc/ssl/certs
	I0914 10:33:22.058799    5189 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 10:33:22.061432    5189 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19643-1079/.minikube/files/etc/ssl/certs/16032.pem --> /etc/ssl/certs/16032.pem (1708 bytes)
	I0914 10:33:22.068074    5189 start.go:296] duration metric: took 50.742209ms for postStartSetup
	I0914 10:33:22.068088    5189 fix.go:56] duration metric: took 21.039633666s for fixHost
	I0914 10:33:22.068129    5189 main.go:141] libmachine: Using SSH client type: native
	I0914 10:33:22.068228    5189 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104591190] 0x1045939d0 <nil>  [] 0s} localhost 50483 <nil> <nil>}
	I0914 10:33:22.068232    5189 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 10:33:22.135582    5189 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726335202.269895962
	
	I0914 10:33:22.135591    5189 fix.go:216] guest clock: 1726335202.269895962
	I0914 10:33:22.135598    5189 fix.go:229] Guest: 2024-09-14 10:33:22.269895962 -0700 PDT Remote: 2024-09-14 10:33:22.06809 -0700 PDT m=+21.149973417 (delta=201.805962ms)
	I0914 10:33:22.135610    5189 fix.go:200] guest clock delta is within tolerance: 201.805962ms
	I0914 10:33:22.135613    5189 start.go:83] releasing machines lock for "stopped-upgrade-130000", held for 21.107168583s
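
fixHost ends with a guest clock check: `date +%s.%N` runs in the VM and is compared against host wall time, and the roughly 202ms delta is accepted without forcing a resync. A sketch of that comparison; the one-second threshold here is an assumption, since the log only shows that 201ms is within tolerance:

    # Guest-clock tolerance check; the 1s threshold is an assumption.
    guest=$(ssh -p 50483 docker@127.0.0.1 'date +%s.%N')
    host=$(date +%s.%N)
    delta=$(echo "$host - $guest" | bc | tr -d '-')
    if [ "$(echo "$delta < 1" | bc)" = 1 ]; then
      echo "guest clock delta ${delta}s is within tolerance"
    fi
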
	I0914 10:33:22.135682    5189 ssh_runner.go:195] Run: cat /version.json
	I0914 10:33:22.135695    5189 sshutil.go:53] new ssh client: &{IP:localhost Port:50483 SSHKeyPath:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/stopped-upgrade-130000/id_rsa Username:docker}
	I0914 10:33:22.135682    5189 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 10:33:22.135746    5189 sshutil.go:53] new ssh client: &{IP:localhost Port:50483 SSHKeyPath:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/stopped-upgrade-130000/id_rsa Username:docker}
	W0914 10:33:22.136395    5189 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50483: connect: connection refused
	I0914 10:33:22.136454    5189 retry.go:31] will retry after 350.599657ms: dial tcp [::1]:50483: connect: connection refused
	W0914 10:33:22.171051    5189 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0914 10:33:22.171113    5189 ssh_runner.go:195] Run: systemctl --version
	I0914 10:33:22.172889    5189 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 10:33:22.174575    5189 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 10:33:22.174599    5189 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0914 10:33:22.177514    5189 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0914 10:33:22.182497    5189 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 10:33:22.182505    5189 start.go:495] detecting cgroup driver to use...
	I0914 10:33:22.182583    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 10:33:22.189151    5189 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0914 10:33:22.192667    5189 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0914 10:33:22.195741    5189 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0914 10:33:22.195768    5189 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0914 10:33:22.198488    5189 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0914 10:33:22.201713    5189 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0914 10:33:22.204922    5189 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0914 10:33:22.208176    5189 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 10:33:22.211076    5189 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0914 10:33:22.213898    5189 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0914 10:33:22.217285    5189 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0914 10:33:22.220873    5189 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 10:33:22.223910    5189 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 10:33:22.226445    5189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 10:33:22.313299    5189 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0914 10:33:22.319493    5189 start.go:495] detecting cgroup driver to use...
	I0914 10:33:22.319566    5189 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0914 10:33:22.324922    5189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 10:33:22.330464    5189 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 10:33:22.340864    5189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 10:33:22.345257    5189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0914 10:33:22.349989    5189 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0914 10:33:22.397495    5189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0914 10:33:22.403023    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 10:33:22.408204    5189 ssh_runner.go:195] Run: which cri-dockerd
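
Note that /etc/crictl.yaml has now been written twice: first pointing crictl at the containerd socket while the cgroup driver was probed, then rewritten above to the cri-dockerd socket once Docker is settled on as the runtime. The second write is equivalent to:

    # Standalone form of the second crictl.yaml write above.
    printf 'runtime-endpoint: unix:///var/run/cri-dockerd.sock\n' | sudo tee /etc/crictl.yaml
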
	I0914 10:33:22.409371    5189 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0914 10:33:22.412260    5189 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0914 10:33:22.417183    5189 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0914 10:33:22.495559    5189 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0914 10:33:22.575028    5189 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0914 10:33:22.575099    5189 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0914 10:33:22.580240    5189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 10:33:22.655581    5189 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0914 10:33:23.811822    5189 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.156273542s)
	I0914 10:33:23.811891    5189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0914 10:33:23.816337    5189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0914 10:33:23.820705    5189 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0914 10:33:23.894215    5189 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0914 10:33:23.975072    5189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 10:33:24.034352    5189 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0914 10:33:24.040270    5189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0914 10:33:24.044760    5189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 10:33:24.123492    5189 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0914 10:33:24.163206    5189 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0914 10:33:24.163314    5189 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0914 10:33:24.165844    5189 start.go:563] Will wait 60s for crictl version
	I0914 10:33:24.165889    5189 ssh_runner.go:195] Run: which crictl
	I0914 10:33:24.167244    5189 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 10:33:24.181349    5189 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0914 10:33:24.181446    5189 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0914 10:33:24.196989    5189 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0914 10:33:24.217294    5189 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0914 10:33:24.217381    5189 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0914 10:33:24.218625    5189 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
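
The one-liner above rewrites /etc/hosts through a temp file: drop any stale host.minikube.internal entry, append a mapping to 10.0.2.2 (the host side of QEMU user-mode networking), and copy the result back into place. Unrolled:

    # The hosts-file rewrite above, unrolled; 10.0.2.2 is the QEMU
    # user-mode address of the host, so host.minikube.internal resolves
    # to the machine running minikube.
    grep -v $'\thost.minikube.internal$' /etc/hosts > /tmp/h.$$
    printf '10.0.2.2\thost.minikube.internal\n' >> /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts
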
	I0914 10:33:24.222075    5189 kubeadm.go:883] updating cluster {Name:stopped-upgrade-130000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50518 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-130000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0914 10:33:24.222122    5189 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0914 10:33:24.222171    5189 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0914 10:33:24.237150    5189 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0914 10:33:24.237158    5189 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0914 10:33:24.237215    5189 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0914 10:33:24.240619    5189 ssh_runner.go:195] Run: which lz4
	I0914 10:33:24.241892    5189 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0914 10:33:24.243160    5189 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 10:33:24.243170    5189 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0914 10:33:25.169792    5189 docker.go:649] duration metric: took 927.994834ms to copy over tarball
	I0914 10:33:25.169857    5189 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 10:33:22.725364    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:33:26.324858    5189 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.155035959s)
	I0914 10:33:26.324873    5189 ssh_runner.go:146] rm: /preloaded.tar.lz4
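
The preload path shown above avoids pulling images over the network: once `stat` confirms no tarball exists in the guest, the roughly 360MB lz4 archive of preloaded /var/lib/docker content is copied in, unpacked over /var, and removed. Expressed as plain scp/ssh for illustration (minikube's ssh_runner does the actual transfer):

    # Preload flow as plain commands; minikube's ssh_runner does the copy.
    scp -P 50483 preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 \
        docker@127.0.0.1:/preloaded.tar.lz4
    ssh -p 50483 docker@127.0.0.1 'sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4 && sudo rm /preloaded.tar.lz4'
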
	I0914 10:33:26.341045    5189 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0914 10:33:26.344609    5189 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0914 10:33:26.349726    5189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 10:33:26.427489    5189 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0914 10:33:28.382311    5189 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.954878959s)
	I0914 10:33:28.382507    5189 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0914 10:33:28.395494    5189 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0914 10:33:28.395502    5189 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0914 10:33:28.395509    5189 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0914 10:33:28.407109    5189 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 10:33:28.407887    5189 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0914 10:33:28.409043    5189 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 10:33:28.409207    5189 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0914 10:33:28.410238    5189 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0914 10:33:28.410484    5189 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0914 10:33:28.411378    5189 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0914 10:33:28.412597    5189 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0914 10:33:28.412643    5189 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0914 10:33:28.412690    5189 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0914 10:33:28.413696    5189 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0914 10:33:28.414054    5189 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0914 10:33:28.415463    5189 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0914 10:33:28.415500    5189 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0914 10:33:28.416432    5189 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0914 10:33:28.417157    5189 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0914 10:33:28.832130    5189 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0914 10:33:28.843579    5189 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0914 10:33:28.843604    5189 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0914 10:33:28.843671    5189 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0914 10:33:28.844017    5189 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0914 10:33:28.856689    5189 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0914 10:33:28.861527    5189 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0914 10:33:28.862398    5189 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0914 10:33:28.862415    5189 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0914 10:33:28.862451    5189 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0914 10:33:28.862540    5189 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0914 10:33:28.865072    5189 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0914 10:33:28.869951    5189 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0914 10:33:28.869972    5189 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0914 10:33:28.870037    5189 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0914 10:33:28.884139    5189 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0914 10:33:28.884158    5189 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0914 10:33:28.884235    5189 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0914 10:33:28.886532    5189 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0914 10:33:28.895283    5189 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0914 10:33:28.895407    5189 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0914 10:33:28.895423    5189 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0914 10:33:28.895479    5189 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0914 10:33:28.905314    5189 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0914 10:33:28.910842    5189 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0914 10:33:28.910954    5189 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0914 10:33:28.912646    5189 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0914 10:33:28.912658    5189 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0914 10:33:28.920545    5189 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0914 10:33:28.920554    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	W0914 10:33:28.923333    5189 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0914 10:33:28.923487    5189 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0914 10:33:28.936160    5189 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0914 10:33:28.961876    5189 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0914 10:33:28.961924    5189 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0914 10:33:28.961941    5189 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0914 10:33:28.961971    5189 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0914 10:33:28.961981    5189 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0914 10:33:28.962005    5189 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0914 10:33:28.962016    5189 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0914 10:33:28.971829    5189 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0914 10:33:28.973083    5189 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0914 10:33:28.973208    5189 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0914 10:33:28.974818    5189 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0914 10:33:28.974828    5189 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0914 10:33:29.016322    5189 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0914 10:33:29.016334    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0914 10:33:29.052491    5189 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	W0914 10:33:29.248435    5189 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0914 10:33:29.248667    5189 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 10:33:29.265962    5189 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0914 10:33:29.265990    5189 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 10:33:29.266086    5189 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 10:33:29.284069    5189 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0914 10:33:29.284415    5189 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0914 10:33:29.285936    5189 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0914 10:33:29.285954    5189 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0914 10:33:29.317744    5189 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0914 10:33:29.317757    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0914 10:33:29.554246    5189 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0914 10:33:29.554281    5189 cache_images.go:92] duration metric: took 1.158814875s to LoadCachedImages
	W0914 10:33:29.554318    5189 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
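The sequence above is minikube's image cache fast path: stat the tarball on the guest, scp it over from the host cache if missing, pipe it into docker load, and warn (without failing) when a cached image such as kube-proxy_v1.24.1 is absent on the host. A minimal local sketch of the load step in Go, with a hypothetical tarball path standing in for the scp'd file:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // loadCachedImage mirrors the stat-then-load step above: if the tarball
    // is already on disk, feed it to `docker load` (the log does the same
    // via `sudo cat <tar> | docker load` over SSH).
    func loadCachedImage(tar string) error {
        if _, err := os.Stat(tar); err != nil {
            return fmt.Errorf("tarball not present, would scp from host cache first: %w", err)
        }
        cmd := exec.Command("docker", "load", "-i", tar)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        return cmd.Run()
    }

    func main() {
        if err := loadCachedImage("/var/lib/minikube/images/pause_3.7"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }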
	I0914 10:33:29.554328    5189 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0914 10:33:29.554379    5189 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-130000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-130000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
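The kubelet flags above are installed as a systemd drop-in (scp'd a few lines below to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf); the empty ExecStart= line resets the base unit's command before the override, per standard systemd drop-in rules. A hedged sketch of writing such a drop-in, with the flag list abbreviated from the unit shown above:

    package main

    import (
        "os"
        "strings"
    )

    // writeKubeletDropIn writes the ExecStart override. Flags abbreviated;
    // the real drop-in carries the full --bootstrap-kubeconfig/--config/
    // --container-runtime-endpoint set from the unit above.
    func writeKubeletDropIn() error {
        lines := []string{
            "[Unit]",
            "Wants=docker.socket",
            "",
            "[Service]",
            "ExecStart=",
            "ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --kubeconfig=/etc/kubernetes/kubelet.conf --hostname-override=stopped-upgrade-130000 --node-ip=10.0.2.15",
            "",
        }
        if err := os.MkdirAll("/etc/systemd/system/kubelet.service.d", 0755); err != nil {
            return err
        }
        return os.WriteFile("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf",
            []byte(strings.Join(lines, "\n")), 0644)
    }

    func main() { _ = writeKubeletDropIn() }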
	I0914 10:33:29.554462    5189 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0914 10:33:29.567967    5189 cni.go:84] Creating CNI manager for ""
	I0914 10:33:29.567977    5189 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 10:33:29.567982    5189 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 10:33:29.567990    5189 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-130000 NodeName:stopped-upgrade-130000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 10:33:29.568052    5189 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-130000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 10:33:29.568104    5189 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0914 10:33:29.571280    5189 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 10:33:29.571312    5189 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 10:33:29.573934    5189 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0914 10:33:29.578954    5189 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 10:33:29.583670    5189 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0914 10:33:29.589248    5189 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0914 10:33:29.590469    5189 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
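The one-liner above makes the control-plane hostname mapping idempotent: grep -v strips any existing control-plane.minikube.internal line, the echo appends the fresh 10.0.2.15 mapping, and the result is copied back over /etc/hosts. The same filter-and-append pattern sketched in Go (path parameterized for illustration):

    package main

    import (
        "os"
        "strings"
    )

    // ensureHostsEntry drops any stale line ending in "\t<host>" and appends
    // the current mapping, mirroring the grep -v / echo / cp pipeline above.
    func ensureHostsEntry(path, ip, host string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+host) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+host)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        _ = ensureHostsEntry("/etc/hosts", "10.0.2.15", "control-plane.minikube.internal")
    }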
	I0914 10:33:29.593772    5189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 10:33:29.654335    5189 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 10:33:29.664452    5189 certs.go:68] Setting up /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/stopped-upgrade-130000 for IP: 10.0.2.15
	I0914 10:33:29.664461    5189 certs.go:194] generating shared ca certs ...
	I0914 10:33:29.664470    5189 certs.go:226] acquiring lock for ca certs: {Name:mk7a785a7c5445527aceab92dcaa64cad76e8086 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 10:33:29.664627    5189 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19643-1079/.minikube/ca.key
	I0914 10:33:29.664679    5189 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19643-1079/.minikube/proxy-client-ca.key
	I0914 10:33:29.664686    5189 certs.go:256] generating profile certs ...
	I0914 10:33:29.664765    5189 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/stopped-upgrade-130000/client.key
	I0914 10:33:29.664783    5189 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/stopped-upgrade-130000/apiserver.key.74bfbd6c
	I0914 10:33:29.664792    5189 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/stopped-upgrade-130000/apiserver.crt.74bfbd6c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0914 10:33:29.849503    5189 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/stopped-upgrade-130000/apiserver.crt.74bfbd6c ...
	I0914 10:33:29.849527    5189 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/stopped-upgrade-130000/apiserver.crt.74bfbd6c: {Name:mkf3e51e13810059867d19fbec340487cd9b4a7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 10:33:29.851226    5189 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/stopped-upgrade-130000/apiserver.key.74bfbd6c ...
	I0914 10:33:29.851238    5189 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/stopped-upgrade-130000/apiserver.key.74bfbd6c: {Name:mke6a4e61bc20a372cdee59dad6d1444a3dde507 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 10:33:29.851386    5189 certs.go:381] copying /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/stopped-upgrade-130000/apiserver.crt.74bfbd6c -> /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/stopped-upgrade-130000/apiserver.crt
	I0914 10:33:29.851533    5189 certs.go:385] copying /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/stopped-upgrade-130000/apiserver.key.74bfbd6c -> /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/stopped-upgrade-130000/apiserver.key
	I0914 10:33:29.851696    5189 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/stopped-upgrade-130000/proxy-client.key
	I0914 10:33:29.851836    5189 certs.go:484] found cert: /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/1603.pem (1338 bytes)
	W0914 10:33:29.851867    5189 certs.go:480] ignoring /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/1603_empty.pem, impossibly tiny 0 bytes
	I0914 10:33:29.851874    5189 certs.go:484] found cert: /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/ca-key.pem (1675 bytes)
	I0914 10:33:29.851894    5189 certs.go:484] found cert: /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/ca.pem (1078 bytes)
	I0914 10:33:29.851912    5189 certs.go:484] found cert: /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/cert.pem (1123 bytes)
	I0914 10:33:29.851930    5189 certs.go:484] found cert: /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/key.pem (1675 bytes)
	I0914 10:33:29.852260    5189 certs.go:484] found cert: /Users/jenkins/minikube-integration/19643-1079/.minikube/files/etc/ssl/certs/16032.pem (1708 bytes)
	I0914 10:33:29.852635    5189 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19643-1079/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 10:33:29.860065    5189 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19643-1079/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 10:33:29.866618    5189 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19643-1079/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 10:33:29.873540    5189 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19643-1079/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 10:33:29.880925    5189 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/stopped-upgrade-130000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0914 10:33:29.888608    5189 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/stopped-upgrade-130000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 10:33:29.895778    5189 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/stopped-upgrade-130000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 10:33:29.902366    5189 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/stopped-upgrade-130000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0914 10:33:29.909241    5189 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19643-1079/.minikube/files/etc/ssl/certs/16032.pem --> /usr/share/ca-certificates/16032.pem (1708 bytes)
	I0914 10:33:29.916511    5189 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19643-1079/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 10:33:29.923420    5189 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/1603.pem --> /usr/share/ca-certificates/1603.pem (1338 bytes)
	I0914 10:33:29.930052    5189 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
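The profile cert generated above carries IP SANs for the service VIP (10.96.0.1), loopback, and the node IP (10.0.2.15), so the apiserver stays reachable under any of those addresses; minikube signs it with its own minikubeCA. A self-signed stand-in (no CA, for brevity) showing the SAN wiring with Go's crypto/x509:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            // The four IP SANs from the crypto.go line above.
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("10.0.2.15"),
            },
            KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        // Self-signed here; minikube signs with its CA instead.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            log.Fatal(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }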
	I0914 10:33:29.935184    5189 ssh_runner.go:195] Run: openssl version
	I0914 10:33:29.937050    5189 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 10:33:29.940401    5189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 10:33:29.941805    5189 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 14 16:43 /usr/share/ca-certificates/minikubeCA.pem
	I0914 10:33:29.941829    5189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 10:33:29.943611    5189 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 10:33:29.946304    5189 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1603.pem && ln -fs /usr/share/ca-certificates/1603.pem /etc/ssl/certs/1603.pem"
	I0914 10:33:29.949402    5189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1603.pem
	I0914 10:33:29.950810    5189 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 14 16:59 /usr/share/ca-certificates/1603.pem
	I0914 10:33:29.950845    5189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1603.pem
	I0914 10:33:29.952557    5189 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1603.pem /etc/ssl/certs/51391683.0"
	I0914 10:33:29.955739    5189 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16032.pem && ln -fs /usr/share/ca-certificates/16032.pem /etc/ssl/certs/16032.pem"
	I0914 10:33:29.958507    5189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16032.pem
	I0914 10:33:29.959911    5189 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 14 16:59 /usr/share/ca-certificates/16032.pem
	I0914 10:33:29.959932    5189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16032.pem
	I0914 10:33:29.961740    5189 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16032.pem /etc/ssl/certs/3ec20f2e.0"
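Each `openssl x509 -hash` call above computes the subject-name hash OpenSSL uses to look up CAs, and the paired ln -fs creates the <hash>.0 symlink in /etc/ssl/certs (b5213941.0 for minikubeCA.pem, for example), which is what c_rehash would produce. The same pairing sketched in Go, shelling out to openssl for the hash:

    package main

    import (
        "log"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkCACert computes the OpenSSL subject hash of a PEM cert and links
    // /etc/ssl/certs/<hash>.0 to it, matching the openssl/ln pairs above.
    func linkCACert(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
        os.Remove(link) // emulate ln -fs: replace any existing link
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            log.Fatal(err)
        }
    }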
	I0914 10:33:29.965205    5189 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 10:33:29.966752    5189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 10:33:29.968556    5189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 10:33:29.970402    5189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 10:33:29.972287    5189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 10:33:29.974195    5189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 10:33:29.975990    5189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0914 10:33:29.977716    5189 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-130000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50518 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-130000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0914 10:33:29.977789    5189 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0914 10:33:29.988055    5189 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 10:33:29.991121    5189 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0914 10:33:29.991129    5189 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0914 10:33:29.991157    5189 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 10:33:29.994594    5189 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 10:33:29.994901    5189 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-130000" does not appear in /Users/jenkins/minikube-integration/19643-1079/kubeconfig
	I0914 10:33:29.995026    5189 kubeconfig.go:62] /Users/jenkins/minikube-integration/19643-1079/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-130000" cluster setting kubeconfig missing "stopped-upgrade-130000" context setting]
	I0914 10:33:29.995222    5189 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19643-1079/kubeconfig: {Name:mk2bfa274931cfcaab81c340801bce4006cf7459 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 10:33:29.995731    5189 kapi.go:59] client config for stopped-upgrade-130000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/stopped-upgrade-130000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/stopped-upgrade-130000/client.key", CAFile:"/Users/jenkins/minikube-integration/19643-1079/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105b69800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
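The rest.Config dump above is the repaired kubeconfig materialized as a client-go config: the endpoint https://10.0.2.15:8443 plus the profile's client cert/key and the minikube CA. Building the equivalent by hand with client-go (assuming client-go on the module path; file paths taken from the log):

    package main

    import (
        "log"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        cfg := &rest.Config{
            Host: "https://10.0.2.15:8443",
            TLSClientConfig: rest.TLSClientConfig{
                CertFile: "/Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/stopped-upgrade-130000/client.crt",
                KeyFile:  "/Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/stopped-upgrade-130000/client.key",
                CAFile:   "/Users/jenkins/minikube-integration/19643-1079/.minikube/ca.crt",
            },
        }
        // A clientset built from this config talks to the same endpoint the
        // healthz checks below keep probing.
        if _, err := kubernetes.NewForConfig(cfg); err != nil {
            log.Fatal(err)
        }
    }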
	I0914 10:33:29.996066    5189 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 10:33:29.998836    5189 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-130000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
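Drift detection here is simply `diff -u` between the kubeadm.yaml already on the node and the freshly rendered .new file; exit status 1 means the files differ (above, the criSocket scheme and the cgroup driver changed across the upgrade), which triggers a reconfigure. The exit-code convention sketched in Go:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    // configDrifted runs `diff -u old new`: exit 0 means identical, exit 1
    // means the configs drifted, anything else is a real error.
    func configDrifted(oldPath, newPath string) (bool, string, error) {
        out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
        if err == nil {
            return false, "", nil
        }
        var ee *exec.ExitError
        if errors.As(err, &ee) && ee.ExitCode() == 1 {
            return true, string(out), nil
        }
        return false, "", err
    }

    func main() {
        drifted, diff, err := configDrifted("/var/tmp/minikube/kubeadm.yaml",
            "/var/tmp/minikube/kubeadm.yaml.new")
        fmt.Println(drifted, err)
        fmt.Print(diff)
    }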
	I0914 10:33:29.998844    5189 kubeadm.go:1160] stopping kube-system containers ...
	I0914 10:33:29.998892    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0914 10:33:30.010033    5189 docker.go:483] Stopping containers: [bc0eb1fe6478 f2165e8cce8d ea8a24c9014a ccbe87febee7 bedcedf78c08 536e693fe537 5b995c5ba76a 8fe86898c11f]
	I0914 10:33:30.010116    5189 ssh_runner.go:195] Run: docker stop bc0eb1fe6478 f2165e8cce8d ea8a24c9014a ccbe87febee7 bedcedf78c08 536e693fe537 5b995c5ba76a 8fe86898c11f
	I0914 10:33:30.021947    5189 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 10:33:30.027838    5189 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 10:33:30.030639    5189 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 10:33:30.030644    5189 kubeadm.go:157] found existing configuration files:
	
	I0914 10:33:30.030670    5189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50518 /etc/kubernetes/admin.conf
	I0914 10:33:30.033276    5189 kubeadm.go:163] "https://control-plane.minikube.internal:50518" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50518 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 10:33:30.033304    5189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 10:33:30.036382    5189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50518 /etc/kubernetes/kubelet.conf
	I0914 10:33:30.038888    5189 kubeadm.go:163] "https://control-plane.minikube.internal:50518" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50518 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 10:33:30.038919    5189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 10:33:30.041478    5189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50518 /etc/kubernetes/controller-manager.conf
	I0914 10:33:30.044447    5189 kubeadm.go:163] "https://control-plane.minikube.internal:50518" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50518 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 10:33:30.044482    5189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 10:33:30.047236    5189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50518 /etc/kubernetes/scheduler.conf
	I0914 10:33:30.049734    5189 kubeadm.go:163] "https://control-plane.minikube.internal:50518" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50518 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 10:33:30.049757    5189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
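The grep/rm pairs above implement stale-config cleanup: each /etc/kubernetes/*.conf is kept only if it already references https://control-plane.minikube.internal:50518; otherwise it is removed (rm -f, so the files missing here are harmless) and kubeadm regenerates it in the phases below. A compact sketch of that rule:

    package main

    import (
        "os"
        "strings"
    )

    // removeIfStale keeps conf only when it references the expected
    // endpoint; missing files are ignored, matching rm -f in the log.
    func removeIfStale(conf, endpoint string) error {
        data, err := os.ReadFile(conf)
        if err == nil && strings.Contains(string(data), endpoint) {
            return nil // up to date, keep it
        }
        if err := os.Remove(conf); err != nil && !os.IsNotExist(err) {
            return err
        }
        return nil
    }

    func main() {
        for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
            _ = removeIfStale("/etc/kubernetes/"+f, "https://control-plane.minikube.internal:50518")
        }
    }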
	I0914 10:33:30.052724    5189 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 10:33:30.055641    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 10:33:30.079674    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 10:33:30.547604    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 10:33:30.679675    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 10:33:30.702283    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
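Rather than a single `kubeadm init`, the restart path replays the individual init phases above (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated kubeadm.yaml, so state that survived the stop can be reused between phases. Driving the same sequence from Go, with the binary and config paths taken from the log:

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        kubeadm := "/var/lib/minikube/binaries/v1.24.1/kubeadm"
        cfg := "/var/tmp/minikube/kubeadm.yaml"
        // Same phase order as the five commands above.
        for _, phase := range [][]string{
            {"init", "phase", "certs", "all"},
            {"init", "phase", "kubeconfig", "all"},
            {"init", "phase", "kubelet-start"},
            {"init", "phase", "control-plane", "all"},
            {"init", "phase", "etcd", "local"},
        } {
            args := append(phase, "--config", cfg)
            if out, err := exec.Command(kubeadm, args...).CombinedOutput(); err != nil {
                log.Fatalf("%v: %v\n%s", phase, err, out)
            }
        }
    }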
	I0914 10:33:30.730054    5189 api_server.go:52] waiting for apiserver process to appear ...
	I0914 10:33:30.730129    5189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 10:33:27.727384    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:33:27.727495    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:33:27.739966    4633 logs.go:276] 2 containers: [cc9f670924c6 ae03d68bb317]
	I0914 10:33:27.740051    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:33:27.750721    4633 logs.go:276] 2 containers: [07d594fbccfe 8076dfae9f44]
	I0914 10:33:27.750801    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:33:27.765982    4633 logs.go:276] 1 containers: [4bea8c7649df]
	I0914 10:33:27.766062    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:33:27.776988    4633 logs.go:276] 2 containers: [1102ad44a942 2320ee4845a9]
	I0914 10:33:27.777075    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:33:27.787568    4633 logs.go:276] 1 containers: [e4616e40153f]
	I0914 10:33:27.787652    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:33:27.798344    4633 logs.go:276] 2 containers: [da12d5ff26bb 24ffba65710c]
	I0914 10:33:27.798423    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:33:27.809209    4633 logs.go:276] 0 containers: []
	W0914 10:33:27.809227    4633 logs.go:278] No container was found matching "kindnet"
	I0914 10:33:27.809305    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:33:27.819732    4633 logs.go:276] 2 containers: [80bfcc2ce813 4a27e6945f3c]
	I0914 10:33:27.819752    4633 logs.go:123] Gathering logs for kube-controller-manager [24ffba65710c] ...
	I0914 10:33:27.819756    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24ffba65710c"
	I0914 10:33:27.831837    4633 logs.go:123] Gathering logs for Docker ...
	I0914 10:33:27.831848    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:33:27.855471    4633 logs.go:123] Gathering logs for kubelet ...
	I0914 10:33:27.855500    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:33:27.895774    4633 logs.go:123] Gathering logs for kube-scheduler [1102ad44a942] ...
	I0914 10:33:27.895781    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1102ad44a942"
	I0914 10:33:27.908096    4633 logs.go:123] Gathering logs for kube-proxy [e4616e40153f] ...
	I0914 10:33:27.908107    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4616e40153f"
	I0914 10:33:27.920185    4633 logs.go:123] Gathering logs for kube-controller-manager [da12d5ff26bb] ...
	I0914 10:33:27.920196    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da12d5ff26bb"
	I0914 10:33:27.937793    4633 logs.go:123] Gathering logs for etcd [8076dfae9f44] ...
	I0914 10:33:27.937804    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8076dfae9f44"
	I0914 10:33:27.955233    4633 logs.go:123] Gathering logs for coredns [4bea8c7649df] ...
	I0914 10:33:27.955244    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bea8c7649df"
	I0914 10:33:27.966238    4633 logs.go:123] Gathering logs for kube-scheduler [2320ee4845a9] ...
	I0914 10:33:27.966248    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2320ee4845a9"
	I0914 10:33:27.981769    4633 logs.go:123] Gathering logs for storage-provisioner [4a27e6945f3c] ...
	I0914 10:33:27.981780    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a27e6945f3c"
	I0914 10:33:27.992722    4633 logs.go:123] Gathering logs for container status ...
	I0914 10:33:27.992734    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:33:28.005018    4633 logs.go:123] Gathering logs for etcd [07d594fbccfe] ...
	I0914 10:33:28.005028    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07d594fbccfe"
	I0914 10:33:28.019019    4633 logs.go:123] Gathering logs for storage-provisioner [80bfcc2ce813] ...
	I0914 10:33:28.019032    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80bfcc2ce813"
	I0914 10:33:28.035225    4633 logs.go:123] Gathering logs for dmesg ...
	I0914 10:33:28.035235    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:33:28.039527    4633 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:33:28.039533    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:33:28.076142    4633 logs.go:123] Gathering logs for kube-apiserver [cc9f670924c6] ...
	I0914 10:33:28.076152    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc9f670924c6"
	I0914 10:33:28.097575    4633 logs.go:123] Gathering logs for kube-apiserver [ae03d68bb317] ...
	I0914 10:33:28.097586    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae03d68bb317"
	I0914 10:33:30.621028    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:33:31.231837    5189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 10:33:31.732166    5189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 10:33:31.736805    5189 api_server.go:72] duration metric: took 1.006793584s to wait for apiserver process to appear ...
	I0914 10:33:31.736816    5189 api_server.go:88] waiting for apiserver healthz status ...
	I0914 10:33:31.736825    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
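From here on, two clusters poll in parallel (pids 4633 and 5189 interleave below), each hitting https://10.0.2.15:8443/healthz until the apiserver answers; every probe carries a short client timeout, so the repeated "context deadline exceeded" lines mean nothing is listening yet. A hedged sketch of such a poll loop (InsecureSkipVerify only to keep the example short; a real check would pin the cluster CA):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func waitForHealthz(url string, deadline time.Duration) error {
        client := &http.Client{
            Timeout:   5 * time.Second, // per-probe timeout, as in the log
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        stop := time.Now().Add(deadline)
        for time.Now().Before(stop) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // apiserver is healthy
                }
            }
            time.Sleep(5 * time.Second)
        }
        return fmt.Errorf("apiserver never became healthy at %s", url)
    }

    func main() {
        fmt.Println(waitForHealthz("https://10.0.2.15:8443/healthz", 4*time.Minute))
    }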
	I0914 10:33:35.623067    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:33:35.623179    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:33:35.634330    4633 logs.go:276] 2 containers: [cc9f670924c6 ae03d68bb317]
	I0914 10:33:35.634418    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:33:35.645167    4633 logs.go:276] 2 containers: [07d594fbccfe 8076dfae9f44]
	I0914 10:33:35.645252    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:33:35.656619    4633 logs.go:276] 1 containers: [4bea8c7649df]
	I0914 10:33:35.656701    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:33:35.667268    4633 logs.go:276] 2 containers: [1102ad44a942 2320ee4845a9]
	I0914 10:33:35.667359    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:33:35.678398    4633 logs.go:276] 1 containers: [e4616e40153f]
	I0914 10:33:35.678471    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:33:35.695335    4633 logs.go:276] 2 containers: [da12d5ff26bb 24ffba65710c]
	I0914 10:33:35.695415    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:33:35.705268    4633 logs.go:276] 0 containers: []
	W0914 10:33:35.705281    4633 logs.go:278] No container was found matching "kindnet"
	I0914 10:33:35.705349    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:33:35.715622    4633 logs.go:276] 2 containers: [80bfcc2ce813 4a27e6945f3c]
	I0914 10:33:35.715638    4633 logs.go:123] Gathering logs for etcd [07d594fbccfe] ...
	I0914 10:33:35.715644    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07d594fbccfe"
	I0914 10:33:35.729373    4633 logs.go:123] Gathering logs for etcd [8076dfae9f44] ...
	I0914 10:33:35.729383    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8076dfae9f44"
	I0914 10:33:35.746863    4633 logs.go:123] Gathering logs for dmesg ...
	I0914 10:33:35.746878    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:33:35.751172    4633 logs.go:123] Gathering logs for kube-scheduler [1102ad44a942] ...
	I0914 10:33:35.751179    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1102ad44a942"
	I0914 10:33:35.763201    4633 logs.go:123] Gathering logs for kube-controller-manager [da12d5ff26bb] ...
	I0914 10:33:35.763211    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da12d5ff26bb"
	I0914 10:33:35.781347    4633 logs.go:123] Gathering logs for Docker ...
	I0914 10:33:35.781368    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:33:35.804778    4633 logs.go:123] Gathering logs for container status ...
	I0914 10:33:35.804794    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:33:35.817493    4633 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:33:35.817505    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:33:35.852124    4633 logs.go:123] Gathering logs for kube-apiserver [ae03d68bb317] ...
	I0914 10:33:35.852137    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae03d68bb317"
	I0914 10:33:35.873987    4633 logs.go:123] Gathering logs for kube-scheduler [2320ee4845a9] ...
	I0914 10:33:35.874001    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2320ee4845a9"
	I0914 10:33:35.897550    4633 logs.go:123] Gathering logs for kube-controller-manager [24ffba65710c] ...
	I0914 10:33:35.897561    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24ffba65710c"
	I0914 10:33:35.909581    4633 logs.go:123] Gathering logs for storage-provisioner [80bfcc2ce813] ...
	I0914 10:33:35.909592    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80bfcc2ce813"
	I0914 10:33:35.921676    4633 logs.go:123] Gathering logs for storage-provisioner [4a27e6945f3c] ...
	I0914 10:33:35.921688    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a27e6945f3c"
	I0914 10:33:35.933209    4633 logs.go:123] Gathering logs for kubelet ...
	I0914 10:33:35.933220    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:33:35.972587    4633 logs.go:123] Gathering logs for kube-apiserver [cc9f670924c6] ...
	I0914 10:33:35.972596    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc9f670924c6"
	I0914 10:33:35.986235    4633 logs.go:123] Gathering logs for coredns [4bea8c7649df] ...
	I0914 10:33:35.986246    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bea8c7649df"
	I0914 10:33:35.998983    4633 logs.go:123] Gathering logs for kube-proxy [e4616e40153f] ...
	I0914 10:33:35.998995    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4616e40153f"
	I0914 10:33:36.738742    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:33:36.738785    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:33:38.512851    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:33:41.738897    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:33:41.738981    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:33:43.513108    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:33:43.513376    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:33:43.537936    4633 logs.go:276] 2 containers: [cc9f670924c6 ae03d68bb317]
	I0914 10:33:43.538067    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:33:43.554185    4633 logs.go:276] 2 containers: [07d594fbccfe 8076dfae9f44]
	I0914 10:33:43.554275    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:33:43.570665    4633 logs.go:276] 1 containers: [4bea8c7649df]
	I0914 10:33:43.570733    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:33:43.581322    4633 logs.go:276] 2 containers: [1102ad44a942 2320ee4845a9]
	I0914 10:33:43.581419    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:33:43.591590    4633 logs.go:276] 1 containers: [e4616e40153f]
	I0914 10:33:43.591679    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:33:43.606974    4633 logs.go:276] 2 containers: [da12d5ff26bb 24ffba65710c]
	I0914 10:33:43.607058    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:33:43.617032    4633 logs.go:276] 0 containers: []
	W0914 10:33:43.617044    4633 logs.go:278] No container was found matching "kindnet"
	I0914 10:33:43.617123    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:33:43.627939    4633 logs.go:276] 2 containers: [80bfcc2ce813 4a27e6945f3c]
	I0914 10:33:43.627957    4633 logs.go:123] Gathering logs for storage-provisioner [80bfcc2ce813] ...
	I0914 10:33:43.627962    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80bfcc2ce813"
	I0914 10:33:43.639695    4633 logs.go:123] Gathering logs for container status ...
	I0914 10:33:43.639709    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:33:43.652233    4633 logs.go:123] Gathering logs for dmesg ...
	I0914 10:33:43.652243    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:33:43.656541    4633 logs.go:123] Gathering logs for kube-apiserver [ae03d68bb317] ...
	I0914 10:33:43.656548    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae03d68bb317"
	I0914 10:33:43.676733    4633 logs.go:123] Gathering logs for etcd [07d594fbccfe] ...
	I0914 10:33:43.676746    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07d594fbccfe"
	I0914 10:33:43.690944    4633 logs.go:123] Gathering logs for kube-scheduler [1102ad44a942] ...
	I0914 10:33:43.690959    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1102ad44a942"
	I0914 10:33:43.702678    4633 logs.go:123] Gathering logs for kube-scheduler [2320ee4845a9] ...
	I0914 10:33:43.702691    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2320ee4845a9"
	I0914 10:33:43.717551    4633 logs.go:123] Gathering logs for kubelet ...
	I0914 10:33:43.717564    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:33:43.757761    4633 logs.go:123] Gathering logs for kube-controller-manager [24ffba65710c] ...
	I0914 10:33:43.757774    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24ffba65710c"
	I0914 10:33:43.769439    4633 logs.go:123] Gathering logs for storage-provisioner [4a27e6945f3c] ...
	I0914 10:33:43.769450    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a27e6945f3c"
	I0914 10:33:43.781196    4633 logs.go:123] Gathering logs for Docker ...
	I0914 10:33:43.781209    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:33:43.803684    4633 logs.go:123] Gathering logs for kube-proxy [e4616e40153f] ...
	I0914 10:33:43.803691    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4616e40153f"
	I0914 10:33:43.815725    4633 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:33:43.815734    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:33:43.851155    4633 logs.go:123] Gathering logs for kube-apiserver [cc9f670924c6] ...
	I0914 10:33:43.851164    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc9f670924c6"
	I0914 10:33:43.865963    4633 logs.go:123] Gathering logs for etcd [8076dfae9f44] ...
	I0914 10:33:43.865973    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8076dfae9f44"
	I0914 10:33:43.883862    4633 logs.go:123] Gathering logs for coredns [4bea8c7649df] ...
	I0914 10:33:43.883874    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bea8c7649df"
	I0914 10:33:43.899116    4633 logs.go:123] Gathering logs for kube-controller-manager [da12d5ff26bb] ...
	I0914 10:33:43.899128    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da12d5ff26bb"
	I0914 10:33:46.426023    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:33:46.739440    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:33:46.739515    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:33:51.428647    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:33:51.428897    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:33:51.447584    4633 logs.go:276] 2 containers: [cc9f670924c6 ae03d68bb317]
	I0914 10:33:51.447698    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:33:51.460898    4633 logs.go:276] 2 containers: [07d594fbccfe 8076dfae9f44]
	I0914 10:33:51.460980    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:33:51.475729    4633 logs.go:276] 1 containers: [4bea8c7649df]
	I0914 10:33:51.475803    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:33:51.486191    4633 logs.go:276] 2 containers: [1102ad44a942 2320ee4845a9]
	I0914 10:33:51.486270    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:33:51.497833    4633 logs.go:276] 1 containers: [e4616e40153f]
	I0914 10:33:51.497908    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:33:51.508277    4633 logs.go:276] 2 containers: [da12d5ff26bb 24ffba65710c]
	I0914 10:33:51.508364    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:33:51.518251    4633 logs.go:276] 0 containers: []
	W0914 10:33:51.518263    4633 logs.go:278] No container was found matching "kindnet"
	I0914 10:33:51.518337    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:33:51.528783    4633 logs.go:276] 2 containers: [80bfcc2ce813 4a27e6945f3c]
	I0914 10:33:51.528800    4633 logs.go:123] Gathering logs for storage-provisioner [4a27e6945f3c] ...
	I0914 10:33:51.528805    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a27e6945f3c"
	I0914 10:33:51.543642    4633 logs.go:123] Gathering logs for dmesg ...
	I0914 10:33:51.543654    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:33:51.548402    4633 logs.go:123] Gathering logs for kube-apiserver [ae03d68bb317] ...
	I0914 10:33:51.548410    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae03d68bb317"
	I0914 10:33:51.571576    4633 logs.go:123] Gathering logs for etcd [8076dfae9f44] ...
	I0914 10:33:51.571595    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8076dfae9f44"
	I0914 10:33:51.589262    4633 logs.go:123] Gathering logs for kube-controller-manager [24ffba65710c] ...
	I0914 10:33:51.589277    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24ffba65710c"
	I0914 10:33:51.601043    4633 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:33:51.601053    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:33:51.635911    4633 logs.go:123] Gathering logs for coredns [4bea8c7649df] ...
	I0914 10:33:51.635922    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bea8c7649df"
	I0914 10:33:51.647872    4633 logs.go:123] Gathering logs for kube-controller-manager [da12d5ff26bb] ...
	I0914 10:33:51.647883    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da12d5ff26bb"
	I0914 10:33:51.665096    4633 logs.go:123] Gathering logs for container status ...
	I0914 10:33:51.665107    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:33:51.676791    4633 logs.go:123] Gathering logs for kube-apiserver [cc9f670924c6] ...
	I0914 10:33:51.676803    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc9f670924c6"
	I0914 10:33:51.691192    4633 logs.go:123] Gathering logs for etcd [07d594fbccfe] ...
	I0914 10:33:51.691202    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07d594fbccfe"
	I0914 10:33:51.704912    4633 logs.go:123] Gathering logs for kube-scheduler [1102ad44a942] ...
	I0914 10:33:51.704928    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1102ad44a942"
	I0914 10:33:51.723840    4633 logs.go:123] Gathering logs for Docker ...
	I0914 10:33:51.723854    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:33:51.747634    4633 logs.go:123] Gathering logs for kubelet ...
	I0914 10:33:51.747642    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:33:51.789545    4633 logs.go:123] Gathering logs for kube-scheduler [2320ee4845a9] ...
	I0914 10:33:51.789556    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2320ee4845a9"
	I0914 10:33:51.804400    4633 logs.go:123] Gathering logs for kube-proxy [e4616e40153f] ...
	I0914 10:33:51.804410    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4616e40153f"
	I0914 10:33:51.816878    4633 logs.go:123] Gathering logs for storage-provisioner [80bfcc2ce813] ...
	I0914 10:33:51.816889    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80bfcc2ce813"
	I0914 10:33:51.739982    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:33:51.740003    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:33:54.335877    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:33:56.740593    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:33:56.740729    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:33:59.337976    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:33:59.338102    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:33:59.350109    4633 logs.go:276] 2 containers: [cc9f670924c6 ae03d68bb317]
	I0914 10:33:59.350205    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:33:59.361032    4633 logs.go:276] 2 containers: [07d594fbccfe 8076dfae9f44]
	I0914 10:33:59.361117    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:33:59.371498    4633 logs.go:276] 1 containers: [4bea8c7649df]
	I0914 10:33:59.371577    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:33:59.381962    4633 logs.go:276] 2 containers: [1102ad44a942 2320ee4845a9]
	I0914 10:33:59.382055    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:33:59.392768    4633 logs.go:276] 1 containers: [e4616e40153f]
	I0914 10:33:59.392849    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:33:59.404287    4633 logs.go:276] 2 containers: [da12d5ff26bb 24ffba65710c]
	I0914 10:33:59.404371    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:33:59.415243    4633 logs.go:276] 0 containers: []
	W0914 10:33:59.415256    4633 logs.go:278] No container was found matching "kindnet"
	I0914 10:33:59.415331    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:33:59.425631    4633 logs.go:276] 2 containers: [80bfcc2ce813 4a27e6945f3c]
	I0914 10:33:59.425648    4633 logs.go:123] Gathering logs for kubelet ...
	I0914 10:33:59.425655    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:33:59.467519    4633 logs.go:123] Gathering logs for coredns [4bea8c7649df] ...
	I0914 10:33:59.467530    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bea8c7649df"
	I0914 10:33:59.478525    4633 logs.go:123] Gathering logs for kube-controller-manager [da12d5ff26bb] ...
	I0914 10:33:59.478535    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da12d5ff26bb"
	I0914 10:33:59.500936    4633 logs.go:123] Gathering logs for storage-provisioner [4a27e6945f3c] ...
	I0914 10:33:59.500948    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a27e6945f3c"
	I0914 10:33:59.514792    4633 logs.go:123] Gathering logs for container status ...
	I0914 10:33:59.514803    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:33:59.526426    4633 logs.go:123] Gathering logs for kube-apiserver [ae03d68bb317] ...
	I0914 10:33:59.526436    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae03d68bb317"
	I0914 10:33:59.552832    4633 logs.go:123] Gathering logs for kube-proxy [e4616e40153f] ...
	I0914 10:33:59.552847    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4616e40153f"
	I0914 10:33:59.565796    4633 logs.go:123] Gathering logs for kube-controller-manager [24ffba65710c] ...
	I0914 10:33:59.565806    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24ffba65710c"
	I0914 10:33:59.576904    4633 logs.go:123] Gathering logs for kube-scheduler [1102ad44a942] ...
	I0914 10:33:59.576915    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1102ad44a942"
	I0914 10:33:59.588626    4633 logs.go:123] Gathering logs for kube-scheduler [2320ee4845a9] ...
	I0914 10:33:59.588639    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2320ee4845a9"
	I0914 10:33:59.603597    4633 logs.go:123] Gathering logs for storage-provisioner [80bfcc2ce813] ...
	I0914 10:33:59.603608    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80bfcc2ce813"
	I0914 10:33:59.615267    4633 logs.go:123] Gathering logs for Docker ...
	I0914 10:33:59.615277    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:33:59.637775    4633 logs.go:123] Gathering logs for dmesg ...
	I0914 10:33:59.637783    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:33:59.642031    4633 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:33:59.642038    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:33:59.678279    4633 logs.go:123] Gathering logs for kube-apiserver [cc9f670924c6] ...
	I0914 10:33:59.678288    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc9f670924c6"
	I0914 10:33:59.696465    4633 logs.go:123] Gathering logs for etcd [07d594fbccfe] ...
	I0914 10:33:59.696476    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07d594fbccfe"
	I0914 10:33:59.710871    4633 logs.go:123] Gathering logs for etcd [8076dfae9f44] ...
	I0914 10:33:59.710885    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8076dfae9f44"
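The docker ps / docker logs pairs above are minikube's per-component log sweep: it resolves container IDs for each control-plane component by name filter, then tails the last 400 lines of every hit. A minimal Go sketch of that pattern, run directly on the node rather than through minikube's ssh_runner (the component names and the 400-line tail come from the log; everything else is illustrative, not minikube's actual code):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // listContainers mirrors `docker ps -a --filter=name=k8s_<component> --format={{.ID}}`,
    // returning the IDs of all matching containers, running or exited.
    func listContainers(component string) ([]string, error) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component,
    		"--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	components := []string{"kube-apiserver", "etcd", "coredns",
    		"kube-scheduler", "kube-proxy", "kube-controller-manager",
    		"kindnet", "storage-provisioner"}
    	for _, c := range components {
    		ids, err := listContainers(c)
    		if err != nil || len(ids) == 0 {
    			fmt.Printf("No container was found matching %q\n", c)
    			continue
    		}
    		for _, id := range ids {
    			// Tail the last 400 lines of each container, as in the log above.
    			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
    			fmt.Printf("== %s [%s] ==\n%s", c, id, logs)
    		}
    	}
    }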
	I0914 10:34:02.230453    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:34:01.742494    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:34:01.742560    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:34:07.231868    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
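Two test profiles (PIDs 4633 and 5189) are polling the same guest address concurrently, which is why their timestamps interleave out of order. Each iteration is an HTTPS GET against /healthz with a client-side timeout, and a timeout is logged as "stopped". A hedged Go sketch of such a probe (the five-second timeout is inferred from the retry cadence in the log; the real client authenticates with the cluster CA instead of skipping TLS verification):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second, // inferred from the ~5s retry cadence above
    		Transport: &http.Transport{
    			// Illustrative only: the real probe trusts the cluster CA.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	for {
    		resp, err := client.Get("https://10.0.2.15:8443/healthz")
    		if err == nil {
    			healthy := resp.StatusCode == http.StatusOK
    			resp.Body.Close()
    			if healthy {
    				fmt.Println("apiserver is healthy")
    				return
    			}
    		} else {
    			// A client timeout surfaces here, matching the "stopped:" lines above.
    			fmt.Println("stopped:", err)
    		}
    		time.Sleep(500 * time.Millisecond) // avoid a hot loop on fast failures
    	}
    }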
	I0914 10:34:07.231940    4633 kubeadm.go:597] duration metric: took 4m4.379932542s to restartPrimaryControlPlane
	W0914 10:34:07.232000    4633 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0914 10:34:07.232028    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0914 10:34:08.201720    4633 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 10:34:08.206741    4633 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 10:34:08.209508    4633 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 10:34:08.212524    4633 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 10:34:08.212530    4633 kubeadm.go:157] found existing configuration files:
	
	I0914 10:34:08.212562    4633 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50278 /etc/kubernetes/admin.conf
	I0914 10:34:08.215139    4633 kubeadm.go:163] "https://control-plane.minikube.internal:50278" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50278 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 10:34:08.215170    4633 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 10:34:08.217549    4633 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50278 /etc/kubernetes/kubelet.conf
	I0914 10:34:08.220576    4633 kubeadm.go:163] "https://control-plane.minikube.internal:50278" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50278 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 10:34:08.220603    4633 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 10:34:08.223632    4633 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50278 /etc/kubernetes/controller-manager.conf
	I0914 10:34:08.226075    4633 kubeadm.go:163] "https://control-plane.minikube.internal:50278" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50278 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 10:34:08.226101    4633 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 10:34:08.228838    4633 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50278 /etc/kubernetes/scheduler.conf
	I0914 10:34:08.231778    4633 kubeadm.go:163] "https://control-plane.minikube.internal:50278" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50278 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 10:34:08.231806    4633 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
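The four grep/rm pairs above implement the stale-kubeconfig sweep: for each file under /etc/kubernetes, grep for the expected control-plane endpoint (https://control-plane.minikube.internal:50278) and delete the file when the endpoint is absent, so the following kubeadm init regenerates it. A compact Go sketch of that loop (illustrative only; minikube actually runs these commands with sudo over the VM's SSH session):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	endpoint := "https://control-plane.minikube.internal:50278"
    	files := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
    	for _, f := range files {
    		path := "/etc/kubernetes/" + f
    		// grep exits non-zero when the endpoint is absent (or the file is missing).
    		if err := exec.Command("grep", endpoint, path).Run(); err != nil {
    			fmt.Printf("%q may not be in %s - will remove\n", endpoint, path)
    			os.Remove(path) // equivalent of `sudo rm -f`
    		}
    	}
    }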
	I0914 10:34:08.234289    4633 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 10:34:08.251846    4633 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0914 10:34:08.251875    4633 kubeadm.go:310] [preflight] Running pre-flight checks
	I0914 10:34:08.310334    4633 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 10:34:08.310425    4633 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 10:34:08.310484    4633 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0914 10:34:08.362276    4633 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 10:34:08.366543    4633 out.go:235]   - Generating certificates and keys ...
	I0914 10:34:08.366580    4633 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0914 10:34:08.366610    4633 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0914 10:34:08.366649    4633 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0914 10:34:08.366679    4633 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0914 10:34:08.366744    4633 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0914 10:34:08.366775    4633 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0914 10:34:08.366827    4633 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0914 10:34:08.366871    4633 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0914 10:34:08.366916    4633 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0914 10:34:08.366952    4633 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0914 10:34:08.366972    4633 kubeadm.go:310] [certs] Using the existing "sa" key
	I0914 10:34:08.367001    4633 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 10:34:08.619131    4633 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 10:34:08.795955    4633 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 10:34:09.003335    4633 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 10:34:09.077024    4633 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 10:34:09.104852    4633 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 10:34:09.105159    4633 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 10:34:09.105184    4633 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0914 10:34:09.209920    4633 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 10:34:06.744071    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:34:06.744136    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:34:09.214167    4633 out.go:235]   - Booting up control plane ...
	I0914 10:34:09.214218    4633 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 10:34:09.214254    4633 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 10:34:09.214288    4633 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 10:34:09.214376    4633 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 10:34:09.214486    4633 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0914 10:34:13.213611    4633 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.001848 seconds
	I0914 10:34:13.213675    4633 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0914 10:34:13.217249    4633 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0914 10:34:13.735323    4633 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0914 10:34:13.735598    4633 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-158000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0914 10:34:14.239568    4633 kubeadm.go:310] [bootstrap-token] Using token: tndwzs.bc88b49vrocmhecw
	I0914 10:34:14.245910    4633 out.go:235]   - Configuring RBAC rules ...
	I0914 10:34:14.245962    4633 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0914 10:34:14.246005    4633 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0914 10:34:14.248170    4633 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0914 10:34:14.253401    4633 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0914 10:34:14.254249    4633 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0914 10:34:14.255120    4633 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0914 10:34:14.258399    4633 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0914 10:34:14.442765    4633 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0914 10:34:14.643557    4633 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0914 10:34:14.644097    4633 kubeadm.go:310] 
	I0914 10:34:14.644128    4633 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0914 10:34:14.644132    4633 kubeadm.go:310] 
	I0914 10:34:14.644169    4633 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0914 10:34:14.644173    4633 kubeadm.go:310] 
	I0914 10:34:14.644185    4633 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0914 10:34:14.644214    4633 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0914 10:34:14.644247    4633 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0914 10:34:14.644255    4633 kubeadm.go:310] 
	I0914 10:34:14.644283    4633 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0914 10:34:14.644298    4633 kubeadm.go:310] 
	I0914 10:34:14.644325    4633 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0914 10:34:14.644329    4633 kubeadm.go:310] 
	I0914 10:34:14.644361    4633 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0914 10:34:14.644397    4633 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0914 10:34:14.644440    4633 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0914 10:34:14.644445    4633 kubeadm.go:310] 
	I0914 10:34:14.644493    4633 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0914 10:34:14.644559    4633 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0914 10:34:14.644563    4633 kubeadm.go:310] 
	I0914 10:34:14.644606    4633 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token tndwzs.bc88b49vrocmhecw \
	I0914 10:34:14.644668    4633 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6f2bcbe86b7524eabb66e32d65311e5f1e28ed403ce521627df0d2c85d84c574 \
	I0914 10:34:14.644687    4633 kubeadm.go:310] 	--control-plane 
	I0914 10:34:14.644691    4633 kubeadm.go:310] 
	I0914 10:34:14.644763    4633 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0914 10:34:14.644769    4633 kubeadm.go:310] 
	I0914 10:34:14.644824    4633 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token tndwzs.bc88b49vrocmhecw \
	I0914 10:34:14.644884    4633 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6f2bcbe86b7524eabb66e32d65311e5f1e28ed403ce521627df0d2c85d84c574 
	I0914 10:34:14.644936    4633 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 10:34:14.644942    4633 cni.go:84] Creating CNI manager for ""
	I0914 10:34:14.644949    4633 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 10:34:14.649463    4633 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 10:34:14.656422    4633 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 10:34:14.659433    4633 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
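The 496-byte payload copied to /etc/cni/net.d/1-k8s.conflist is not reproduced in the log. For orientation, a Go sketch that writes a generic bridge conflist of the kind this step installs (the plugin list and pod CIDR here are assumptions, not the actual file contents):

    package main

    import (
    	"log"
    	"os"
    )

    // A generic bridge CNI conflist; the real 496-byte payload is not shown
    // in the log, so the plugin settings and subnet below are illustrative.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": {"portMappings": true}
        }
      ]
    }`

    func main() {
    	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
    		log.Fatal(err)
    	}
    	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
    		log.Fatal(err)
    	}
    }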
	I0914 10:34:14.663969    4633 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 10:34:14.664029    4633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 10:34:14.664038    4633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-158000 minikube.k8s.io/updated_at=2024_09_14T10_34_14_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=fbeeb744274463b05401c917e5ab21bbaf5ef95a minikube.k8s.io/name=running-upgrade-158000 minikube.k8s.io/primary=true
	I0914 10:34:14.714455    4633 kubeadm.go:1113] duration metric: took 50.468375ms to wait for elevateKubeSystemPrivileges
	I0914 10:34:14.714492    4633 ops.go:34] apiserver oom_adj: -16
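The oom_adj check above reads the kernel OOM-killer adjustment for the apiserver process; the reported value of -16 makes the kernel strongly prefer killing other processes first. A small Go sketch of the same lookup (the pgrep flags mirror the log's invocation; the /proc path is Linux-only):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Same process lookup as the log: the newest process whose full
    	// command line matches kube-apiserver.*minikube.*
    	out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
    	if err != nil {
    		fmt.Println("kube-apiserver not found:", err)
    		return
    	}
    	pid := strings.TrimSpace(string(out))
    	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	// -16, as reported above, steers the OOM killer away from the apiserver.
    	fmt.Printf("apiserver oom_adj: %s", adj)
    }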
	I0914 10:34:14.714497    4633 kubeadm.go:394] duration metric: took 4m11.877349167s to StartCluster
	I0914 10:34:14.714507    4633 settings.go:142] acquiring lock: {Name:mk7db576f28fda26cf1d7d854618889d7d4f8a49 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 10:34:14.714603    4633 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19643-1079/kubeconfig
	I0914 10:34:14.715004    4633 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19643-1079/kubeconfig: {Name:mk2bfa274931cfcaab81c340801bce4006cf7459 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 10:34:14.715232    4633 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 10:34:14.715244    4633 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0914 10:34:14.715277    4633 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-158000"
	I0914 10:34:14.715283    4633 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-158000"
	I0914 10:34:14.715288    4633 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-158000"
	W0914 10:34:14.715292    4633 addons.go:243] addon storage-provisioner should already be in state true
	I0914 10:34:14.715297    4633 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-158000"
	I0914 10:34:14.715304    4633 host.go:66] Checking if "running-upgrade-158000" exists ...
	I0914 10:34:14.715339    4633 config.go:182] Loaded profile config "running-upgrade-158000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0914 10:34:14.716126    4633 kapi.go:59] client config for running-upgrade-158000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/running-upgrade-158000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/running-upgrade-158000/client.key", CAFile:"/Users/jenkins/minikube-integration/19643-1079/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x102159800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0914 10:34:14.716248    4633 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-158000"
	W0914 10:34:14.716253    4633 addons.go:243] addon default-storageclass should already be in state true
	I0914 10:34:14.716260    4633 host.go:66] Checking if "running-upgrade-158000" exists ...
	I0914 10:34:14.719434    4633 out.go:177] * Verifying Kubernetes components...
	I0914 10:34:14.719771    4633 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 10:34:14.723693    4633 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 10:34:14.723707    4633 sshutil.go:53] new ssh client: &{IP:localhost Port:50246 SSHKeyPath:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/running-upgrade-158000/id_rsa Username:docker}
	I0914 10:34:14.727401    4633 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 10:34:11.745964    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:34:11.745988    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:34:14.731536    4633 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 10:34:14.734438    4633 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 10:34:14.734444    4633 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 10:34:14.734450    4633 sshutil.go:53] new ssh client: &{IP:localhost Port:50246 SSHKeyPath:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/running-upgrade-158000/id_rsa Username:docker}
	I0914 10:34:14.828362    4633 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 10:34:14.834107    4633 api_server.go:52] waiting for apiserver process to appear ...
	I0914 10:34:14.834161    4633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 10:34:14.837945    4633 api_server.go:72] duration metric: took 122.707834ms to wait for apiserver process to appear ...
	I0914 10:34:14.837954    4633 api_server.go:88] waiting for apiserver healthz status ...
	I0914 10:34:14.837961    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:34:14.856680    4633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 10:34:14.924949    4633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 10:34:15.211214    4633 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0914 10:34:15.211227    4633 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0914 10:34:16.747980    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:34:16.748017    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:34:19.839885    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:34:19.839936    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:34:21.750191    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:34:21.750288    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:34:24.840110    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:34:24.840142    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:34:26.751519    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:34:26.751539    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:34:29.840289    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:34:29.840323    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:34:31.753519    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:34:31.753671    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:34:31.769519    5189 logs.go:276] 2 containers: [11f9ffdf6e43 ea8a24c9014a]
	I0914 10:34:31.769609    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:34:31.782229    5189 logs.go:276] 2 containers: [46f64762f77a bc0eb1fe6478]
	I0914 10:34:31.782325    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:34:31.793288    5189 logs.go:276] 1 containers: [d58d98b98ad4]
	I0914 10:34:31.793361    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:34:31.803409    5189 logs.go:276] 2 containers: [1b357929f298 f2165e8cce8d]
	I0914 10:34:31.803484    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:34:31.813553    5189 logs.go:276] 1 containers: [213b21806615]
	I0914 10:34:31.813638    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:34:31.825398    5189 logs.go:276] 2 containers: [72fbfd868f6c ccbe87febee7]
	I0914 10:34:31.825483    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:34:31.835618    5189 logs.go:276] 0 containers: []
	W0914 10:34:31.835632    5189 logs.go:278] No container was found matching "kindnet"
	I0914 10:34:31.835702    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:34:31.846208    5189 logs.go:276] 2 containers: [db8219ef9871 7588b357ac42]
	I0914 10:34:31.846224    5189 logs.go:123] Gathering logs for kube-apiserver [ea8a24c9014a] ...
	I0914 10:34:31.846229    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea8a24c9014a"
	I0914 10:34:31.887445    5189 logs.go:123] Gathering logs for kube-scheduler [f2165e8cce8d] ...
	I0914 10:34:31.887457    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2165e8cce8d"
	I0914 10:34:31.902250    5189 logs.go:123] Gathering logs for kube-apiserver [11f9ffdf6e43] ...
	I0914 10:34:31.902261    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11f9ffdf6e43"
	I0914 10:34:31.916245    5189 logs.go:123] Gathering logs for etcd [bc0eb1fe6478] ...
	I0914 10:34:31.916261    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0eb1fe6478"
	I0914 10:34:31.930365    5189 logs.go:123] Gathering logs for kube-proxy [213b21806615] ...
	I0914 10:34:31.930374    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213b21806615"
	I0914 10:34:31.942268    5189 logs.go:123] Gathering logs for kubelet ...
	I0914 10:34:31.942280    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:34:31.980733    5189 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:34:31.980745    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:34:32.059310    5189 logs.go:123] Gathering logs for etcd [46f64762f77a] ...
	I0914 10:34:32.059326    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46f64762f77a"
	I0914 10:34:32.074472    5189 logs.go:123] Gathering logs for kube-controller-manager [ccbe87febee7] ...
	I0914 10:34:32.074483    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccbe87febee7"
	I0914 10:34:32.089157    5189 logs.go:123] Gathering logs for storage-provisioner [db8219ef9871] ...
	I0914 10:34:32.089167    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8219ef9871"
	I0914 10:34:32.100486    5189 logs.go:123] Gathering logs for container status ...
	I0914 10:34:32.100500    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:34:32.115148    5189 logs.go:123] Gathering logs for dmesg ...
	I0914 10:34:32.115159    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:34:32.119469    5189 logs.go:123] Gathering logs for coredns [d58d98b98ad4] ...
	I0914 10:34:32.119476    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d58d98b98ad4"
	I0914 10:34:32.131051    5189 logs.go:123] Gathering logs for kube-scheduler [1b357929f298] ...
	I0914 10:34:32.131061    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b357929f298"
	I0914 10:34:32.142531    5189 logs.go:123] Gathering logs for kube-controller-manager [72fbfd868f6c] ...
	I0914 10:34:32.142542    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72fbfd868f6c"
	I0914 10:34:32.162743    5189 logs.go:123] Gathering logs for storage-provisioner [7588b357ac42] ...
	I0914 10:34:32.162753    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7588b357ac42"
	I0914 10:34:32.174270    5189 logs.go:123] Gathering logs for Docker ...
	I0914 10:34:32.174280    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:34:34.700269    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:34:34.840572    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:34:34.840604    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:34:39.702256    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:34:39.702418    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:34:39.715234    5189 logs.go:276] 2 containers: [11f9ffdf6e43 ea8a24c9014a]
	I0914 10:34:39.715330    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:34:39.726447    5189 logs.go:276] 2 containers: [46f64762f77a bc0eb1fe6478]
	I0914 10:34:39.726535    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:34:39.737256    5189 logs.go:276] 1 containers: [d58d98b98ad4]
	I0914 10:34:39.737339    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:34:39.748438    5189 logs.go:276] 2 containers: [1b357929f298 f2165e8cce8d]
	I0914 10:34:39.748533    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:34:39.759190    5189 logs.go:276] 1 containers: [213b21806615]
	I0914 10:34:39.759280    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:34:39.769873    5189 logs.go:276] 2 containers: [72fbfd868f6c ccbe87febee7]
	I0914 10:34:39.769954    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:34:39.780714    5189 logs.go:276] 0 containers: []
	W0914 10:34:39.780725    5189 logs.go:278] No container was found matching "kindnet"
	I0914 10:34:39.780794    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:34:39.791505    5189 logs.go:276] 2 containers: [db8219ef9871 7588b357ac42]
	I0914 10:34:39.791524    5189 logs.go:123] Gathering logs for Docker ...
	I0914 10:34:39.791529    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:34:39.816681    5189 logs.go:123] Gathering logs for container status ...
	I0914 10:34:39.816688    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:34:39.830365    5189 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:34:39.830379    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:34:39.868043    5189 logs.go:123] Gathering logs for etcd [46f64762f77a] ...
	I0914 10:34:39.868060    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46f64762f77a"
	I0914 10:34:39.881977    5189 logs.go:123] Gathering logs for etcd [bc0eb1fe6478] ...
	I0914 10:34:39.881988    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0eb1fe6478"
	I0914 10:34:39.896885    5189 logs.go:123] Gathering logs for storage-provisioner [db8219ef9871] ...
	I0914 10:34:39.896899    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8219ef9871"
	I0914 10:34:39.908440    5189 logs.go:123] Gathering logs for kube-controller-manager [72fbfd868f6c] ...
	I0914 10:34:39.908451    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72fbfd868f6c"
	I0914 10:34:39.927856    5189 logs.go:123] Gathering logs for storage-provisioner [7588b357ac42] ...
	I0914 10:34:39.927871    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7588b357ac42"
	I0914 10:34:39.939951    5189 logs.go:123] Gathering logs for kube-apiserver [11f9ffdf6e43] ...
	I0914 10:34:39.939964    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11f9ffdf6e43"
	I0914 10:34:39.956283    5189 logs.go:123] Gathering logs for kube-apiserver [ea8a24c9014a] ...
	I0914 10:34:39.956299    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea8a24c9014a"
	I0914 10:34:39.995998    5189 logs.go:123] Gathering logs for kube-scheduler [1b357929f298] ...
	I0914 10:34:39.996008    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b357929f298"
	I0914 10:34:40.007903    5189 logs.go:123] Gathering logs for kube-controller-manager [ccbe87febee7] ...
	I0914 10:34:40.007913    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccbe87febee7"
	I0914 10:34:40.021476    5189 logs.go:123] Gathering logs for kube-proxy [213b21806615] ...
	I0914 10:34:40.021487    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213b21806615"
	I0914 10:34:40.033329    5189 logs.go:123] Gathering logs for kubelet ...
	I0914 10:34:40.033338    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:34:40.072445    5189 logs.go:123] Gathering logs for dmesg ...
	I0914 10:34:40.072455    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:34:40.076458    5189 logs.go:123] Gathering logs for coredns [d58d98b98ad4] ...
	I0914 10:34:40.076465    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d58d98b98ad4"
	I0914 10:34:40.087361    5189 logs.go:123] Gathering logs for kube-scheduler [f2165e8cce8d] ...
	I0914 10:34:40.087373    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2165e8cce8d"
	I0914 10:34:39.841021    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:34:39.841085    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:34:44.841647    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:34:44.841668    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0914 10:34:45.211423    4633 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0914 10:34:45.215754    4633 out.go:177] * Enabled addons: storage-provisioner
	I0914 10:34:42.603405    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:34:45.222651    4633 addons.go:510] duration metric: took 30.508691458s for enable addons: enabled=[storage-provisioner]
	I0914 10:34:47.604951    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:34:47.605308    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:34:47.632173    5189 logs.go:276] 2 containers: [11f9ffdf6e43 ea8a24c9014a]
	I0914 10:34:47.632330    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:34:47.652058    5189 logs.go:276] 2 containers: [46f64762f77a bc0eb1fe6478]
	I0914 10:34:47.652145    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:34:47.665530    5189 logs.go:276] 1 containers: [d58d98b98ad4]
	I0914 10:34:47.665621    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:34:47.677009    5189 logs.go:276] 2 containers: [1b357929f298 f2165e8cce8d]
	I0914 10:34:47.677091    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:34:47.687676    5189 logs.go:276] 1 containers: [213b21806615]
	I0914 10:34:47.687756    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:34:47.698609    5189 logs.go:276] 2 containers: [72fbfd868f6c ccbe87febee7]
	I0914 10:34:47.698686    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:34:47.708999    5189 logs.go:276] 0 containers: []
	W0914 10:34:47.709011    5189 logs.go:278] No container was found matching "kindnet"
	I0914 10:34:47.709085    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:34:47.723969    5189 logs.go:276] 2 containers: [db8219ef9871 7588b357ac42]
	I0914 10:34:47.723985    5189 logs.go:123] Gathering logs for kube-proxy [213b21806615] ...
	I0914 10:34:47.723990    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213b21806615"
	I0914 10:34:47.735535    5189 logs.go:123] Gathering logs for kube-controller-manager [ccbe87febee7] ...
	I0914 10:34:47.735545    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccbe87febee7"
	I0914 10:34:47.754054    5189 logs.go:123] Gathering logs for Docker ...
	I0914 10:34:47.754065    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:34:47.779307    5189 logs.go:123] Gathering logs for kube-apiserver [ea8a24c9014a] ...
	I0914 10:34:47.779318    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea8a24c9014a"
	I0914 10:34:47.818534    5189 logs.go:123] Gathering logs for coredns [d58d98b98ad4] ...
	I0914 10:34:47.818547    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d58d98b98ad4"
	I0914 10:34:47.830534    5189 logs.go:123] Gathering logs for kube-scheduler [f2165e8cce8d] ...
	I0914 10:34:47.830546    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2165e8cce8d"
	I0914 10:34:47.848587    5189 logs.go:123] Gathering logs for kube-controller-manager [72fbfd868f6c] ...
	I0914 10:34:47.848598    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72fbfd868f6c"
	I0914 10:34:47.867505    5189 logs.go:123] Gathering logs for etcd [bc0eb1fe6478] ...
	I0914 10:34:47.867516    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0eb1fe6478"
	I0914 10:34:47.882656    5189 logs.go:123] Gathering logs for kube-scheduler [1b357929f298] ...
	I0914 10:34:47.882666    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b357929f298"
	I0914 10:34:47.895055    5189 logs.go:123] Gathering logs for storage-provisioner [db8219ef9871] ...
	I0914 10:34:47.895068    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8219ef9871"
	I0914 10:34:47.907835    5189 logs.go:123] Gathering logs for storage-provisioner [7588b357ac42] ...
	I0914 10:34:47.907845    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7588b357ac42"
	I0914 10:34:47.919527    5189 logs.go:123] Gathering logs for kubelet ...
	I0914 10:34:47.919537    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:34:47.957446    5189 logs.go:123] Gathering logs for dmesg ...
	I0914 10:34:47.957455    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:34:47.961783    5189 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:34:47.961790    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:34:47.996611    5189 logs.go:123] Gathering logs for kube-apiserver [11f9ffdf6e43] ...
	I0914 10:34:47.996624    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11f9ffdf6e43"
	I0914 10:34:48.012956    5189 logs.go:123] Gathering logs for etcd [46f64762f77a] ...
	I0914 10:34:48.012967    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46f64762f77a"
	I0914 10:34:48.027023    5189 logs.go:123] Gathering logs for container status ...
	I0914 10:34:48.027036    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:34:50.541507    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:34:49.842357    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:34:49.842381    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:34:55.543681    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:34:55.543852    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:34:55.555180    5189 logs.go:276] 2 containers: [11f9ffdf6e43 ea8a24c9014a]
	I0914 10:34:55.555259    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:34:55.565804    5189 logs.go:276] 2 containers: [46f64762f77a bc0eb1fe6478]
	I0914 10:34:55.565889    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:34:55.576241    5189 logs.go:276] 1 containers: [d58d98b98ad4]
	I0914 10:34:55.576326    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:34:55.586839    5189 logs.go:276] 2 containers: [1b357929f298 f2165e8cce8d]
	I0914 10:34:55.586912    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:34:55.596914    5189 logs.go:276] 1 containers: [213b21806615]
	I0914 10:34:55.596990    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:34:55.607251    5189 logs.go:276] 2 containers: [72fbfd868f6c ccbe87febee7]
	I0914 10:34:55.607328    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:34:55.617285    5189 logs.go:276] 0 containers: []
	W0914 10:34:55.617305    5189 logs.go:278] No container was found matching "kindnet"
	I0914 10:34:55.617378    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:34:55.627731    5189 logs.go:276] 2 containers: [db8219ef9871 7588b357ac42]
	I0914 10:34:55.627749    5189 logs.go:123] Gathering logs for kube-scheduler [1b357929f298] ...
	I0914 10:34:55.627755    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b357929f298"
	I0914 10:34:55.639440    5189 logs.go:123] Gathering logs for kube-controller-manager [72fbfd868f6c] ...
	I0914 10:34:55.639452    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72fbfd868f6c"
	I0914 10:34:55.656727    5189 logs.go:123] Gathering logs for storage-provisioner [db8219ef9871] ...
	I0914 10:34:55.656737    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8219ef9871"
	I0914 10:34:55.669504    5189 logs.go:123] Gathering logs for Docker ...
	I0914 10:34:55.669518    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:34:55.693752    5189 logs.go:123] Gathering logs for dmesg ...
	I0914 10:34:55.693761    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:34:55.698333    5189 logs.go:123] Gathering logs for etcd [46f64762f77a] ...
	I0914 10:34:55.698342    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46f64762f77a"
	I0914 10:34:55.712187    5189 logs.go:123] Gathering logs for etcd [bc0eb1fe6478] ...
	I0914 10:34:55.712197    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0eb1fe6478"
	I0914 10:34:55.727078    5189 logs.go:123] Gathering logs for kube-scheduler [f2165e8cce8d] ...
	I0914 10:34:55.727087    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2165e8cce8d"
	I0914 10:34:55.751817    5189 logs.go:123] Gathering logs for storage-provisioner [7588b357ac42] ...
	I0914 10:34:55.751826    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7588b357ac42"
	I0914 10:34:55.762680    5189 logs.go:123] Gathering logs for container status ...
	I0914 10:34:55.762691    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:34:55.774301    5189 logs.go:123] Gathering logs for kubelet ...
	I0914 10:34:55.774310    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:34:55.811751    5189 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:34:55.811765    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:34:55.851332    5189 logs.go:123] Gathering logs for kube-controller-manager [ccbe87febee7] ...
	I0914 10:34:55.851343    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccbe87febee7"
	I0914 10:34:55.865673    5189 logs.go:123] Gathering logs for kube-apiserver [11f9ffdf6e43] ...
	I0914 10:34:55.865686    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11f9ffdf6e43"
	I0914 10:34:55.880235    5189 logs.go:123] Gathering logs for kube-apiserver [ea8a24c9014a] ...
	I0914 10:34:55.880245    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea8a24c9014a"
	I0914 10:34:55.923496    5189 logs.go:123] Gathering logs for coredns [d58d98b98ad4] ...
	I0914 10:34:55.923506    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d58d98b98ad4"
	I0914 10:34:55.934450    5189 logs.go:123] Gathering logs for kube-proxy [213b21806615] ...
	I0914 10:34:55.934462    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213b21806615"
	I0914 10:34:54.843037    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:34:54.843075    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:34:58.446221    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:34:59.844566    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:34:59.844608    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:35:03.448717    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:35:03.449090    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:35:03.480605    5189 logs.go:276] 2 containers: [11f9ffdf6e43 ea8a24c9014a]
	I0914 10:35:03.480753    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:35:03.500255    5189 logs.go:276] 2 containers: [46f64762f77a bc0eb1fe6478]
	I0914 10:35:03.500357    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:35:03.514071    5189 logs.go:276] 1 containers: [d58d98b98ad4]
	I0914 10:35:03.514168    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:35:03.526125    5189 logs.go:276] 2 containers: [1b357929f298 f2165e8cce8d]
	I0914 10:35:03.526210    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:35:03.536803    5189 logs.go:276] 1 containers: [213b21806615]
	I0914 10:35:03.536888    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:35:03.547182    5189 logs.go:276] 2 containers: [72fbfd868f6c ccbe87febee7]
	I0914 10:35:03.547258    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:35:03.556927    5189 logs.go:276] 0 containers: []
	W0914 10:35:03.556941    5189 logs.go:278] No container was found matching "kindnet"
	I0914 10:35:03.557012    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:35:03.567959    5189 logs.go:276] 2 containers: [db8219ef9871 7588b357ac42]
	I0914 10:35:03.567978    5189 logs.go:123] Gathering logs for etcd [bc0eb1fe6478] ...
	I0914 10:35:03.567984    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0eb1fe6478"
	I0914 10:35:03.584303    5189 logs.go:123] Gathering logs for kube-scheduler [f2165e8cce8d] ...
	I0914 10:35:03.584313    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2165e8cce8d"
	I0914 10:35:03.598986    5189 logs.go:123] Gathering logs for kube-controller-manager [ccbe87febee7] ...
	I0914 10:35:03.598994    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccbe87febee7"
	I0914 10:35:03.613192    5189 logs.go:123] Gathering logs for kube-apiserver [11f9ffdf6e43] ...
	I0914 10:35:03.613204    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11f9ffdf6e43"
	I0914 10:35:03.627369    5189 logs.go:123] Gathering logs for etcd [46f64762f77a] ...
	I0914 10:35:03.627382    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46f64762f77a"
	I0914 10:35:03.641395    5189 logs.go:123] Gathering logs for kube-controller-manager [72fbfd868f6c] ...
	I0914 10:35:03.641405    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72fbfd868f6c"
	I0914 10:35:03.659065    5189 logs.go:123] Gathering logs for storage-provisioner [db8219ef9871] ...
	I0914 10:35:03.659074    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8219ef9871"
	I0914 10:35:03.670720    5189 logs.go:123] Gathering logs for storage-provisioner [7588b357ac42] ...
	I0914 10:35:03.670735    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7588b357ac42"
	I0914 10:35:03.682245    5189 logs.go:123] Gathering logs for Docker ...
	I0914 10:35:03.682255    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:35:03.706340    5189 logs.go:123] Gathering logs for container status ...
	I0914 10:35:03.706349    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:35:03.718047    5189 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:35:03.718057    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:35:03.753963    5189 logs.go:123] Gathering logs for kube-scheduler [1b357929f298] ...
	I0914 10:35:03.753975    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b357929f298"
	I0914 10:35:03.766562    5189 logs.go:123] Gathering logs for kubelet ...
	I0914 10:35:03.766573    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:35:03.804423    5189 logs.go:123] Gathering logs for dmesg ...
	I0914 10:35:03.804440    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:35:03.808964    5189 logs.go:123] Gathering logs for kube-apiserver [ea8a24c9014a] ...
	I0914 10:35:03.808979    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea8a24c9014a"
	I0914 10:35:03.848378    5189 logs.go:123] Gathering logs for coredns [d58d98b98ad4] ...
	I0914 10:35:03.848391    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d58d98b98ad4"
	I0914 10:35:03.859821    5189 logs.go:123] Gathering logs for kube-proxy [213b21806615] ...
	I0914 10:35:03.859833    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213b21806615"
	I0914 10:35:04.846449    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:35:04.846495    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:35:06.373673    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:35:09.848505    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:35:09.848529    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:35:11.375840    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:35:11.376120    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:35:11.401222    5189 logs.go:276] 2 containers: [11f9ffdf6e43 ea8a24c9014a]
	I0914 10:35:11.401372    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:35:11.417944    5189 logs.go:276] 2 containers: [46f64762f77a bc0eb1fe6478]
	I0914 10:35:11.418046    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:35:11.430870    5189 logs.go:276] 1 containers: [d58d98b98ad4]
	I0914 10:35:11.430940    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:35:11.442269    5189 logs.go:276] 2 containers: [1b357929f298 f2165e8cce8d]
	I0914 10:35:11.442339    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:35:11.453257    5189 logs.go:276] 1 containers: [213b21806615]
	I0914 10:35:11.453329    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:35:11.464071    5189 logs.go:276] 2 containers: [72fbfd868f6c ccbe87febee7]
	I0914 10:35:11.464133    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:35:11.478374    5189 logs.go:276] 0 containers: []
	W0914 10:35:11.478387    5189 logs.go:278] No container was found matching "kindnet"
	I0914 10:35:11.478456    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:35:11.489054    5189 logs.go:276] 2 containers: [db8219ef9871 7588b357ac42]
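	Each diagnostic pass starts like the block above: one docker ps -a per control-plane component, filtered on the kubelet's k8s_<component> container-name prefix, to resolve the container IDs whose logs are tailed next. A minimal sketch of that enumeration, using only the commands visible in the Run: lines:

	    # enumerate container IDs per component, mirroring the filters above
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	             kube-controller-manager kindnet storage-provisioner; do
	      ids=$(docker ps -a --filter=name=k8s_${c} --format='{{.ID}}')
	      echo "${c}: ${ids:-none}"   # kindnet prints "none", matching the warning above
	    done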
	I0914 10:35:11.489070    5189 logs.go:123] Gathering logs for kube-apiserver [11f9ffdf6e43] ...
	I0914 10:35:11.489076    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11f9ffdf6e43"
	I0914 10:35:11.503164    5189 logs.go:123] Gathering logs for coredns [d58d98b98ad4] ...
	I0914 10:35:11.503177    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d58d98b98ad4"
	I0914 10:35:11.514390    5189 logs.go:123] Gathering logs for kube-scheduler [1b357929f298] ...
	I0914 10:35:11.514402    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b357929f298"
	I0914 10:35:11.526224    5189 logs.go:123] Gathering logs for storage-provisioner [db8219ef9871] ...
	I0914 10:35:11.526236    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8219ef9871"
	I0914 10:35:11.538812    5189 logs.go:123] Gathering logs for kubelet ...
	I0914 10:35:11.538827    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:35:11.575605    5189 logs.go:123] Gathering logs for dmesg ...
	I0914 10:35:11.575614    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:35:11.579630    5189 logs.go:123] Gathering logs for etcd [bc0eb1fe6478] ...
	I0914 10:35:11.579639    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0eb1fe6478"
	I0914 10:35:11.594556    5189 logs.go:123] Gathering logs for kube-proxy [213b21806615] ...
	I0914 10:35:11.594565    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213b21806615"
	I0914 10:35:11.606987    5189 logs.go:123] Gathering logs for kube-controller-manager [72fbfd868f6c] ...
	I0914 10:35:11.606998    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72fbfd868f6c"
	I0914 10:35:11.627961    5189 logs.go:123] Gathering logs for kube-controller-manager [ccbe87febee7] ...
	I0914 10:35:11.627970    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccbe87febee7"
	I0914 10:35:11.642285    5189 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:35:11.642300    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:35:11.698676    5189 logs.go:123] Gathering logs for kube-apiserver [ea8a24c9014a] ...
	I0914 10:35:11.698688    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea8a24c9014a"
	I0914 10:35:11.737609    5189 logs.go:123] Gathering logs for etcd [46f64762f77a] ...
	I0914 10:35:11.737619    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46f64762f77a"
	I0914 10:35:11.752113    5189 logs.go:123] Gathering logs for kube-scheduler [f2165e8cce8d] ...
	I0914 10:35:11.752125    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2165e8cce8d"
	I0914 10:35:11.766893    5189 logs.go:123] Gathering logs for storage-provisioner [7588b357ac42] ...
	I0914 10:35:11.766904    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7588b357ac42"
	I0914 10:35:11.778238    5189 logs.go:123] Gathering logs for Docker ...
	I0914 10:35:11.778249    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:35:11.802695    5189 logs.go:123] Gathering logs for container status ...
	I0914 10:35:11.802708    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
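	With the IDs resolved, every pass tails the same fixed set of sources, which is why the cycle repeats below with only the timestamps changing. Consolidated from the Run: lines of this pass (<id> is a placeholder for any enumerated container ID, not a value from the log):

	    docker logs --tail 400 <id>                        # one per discovered container
	    sudo journalctl -u kubelet -n 400                  # kubelet unit logs
	    sudo journalctl -u docker -u cri-docker -n 400     # container runtime logs
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a   # container status, docker fallback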
	I0914 10:35:14.316669    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:35:14.850503    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:35:14.850611    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:35:14.862421    4633 logs.go:276] 1 containers: [04291acc9ea5]
	I0914 10:35:14.862510    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:35:14.873262    4633 logs.go:276] 1 containers: [849350e1760a]
	I0914 10:35:14.873336    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:35:14.883745    4633 logs.go:276] 2 containers: [bb0d72a796ab a39016b44acb]
	I0914 10:35:14.883833    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:35:14.893909    4633 logs.go:276] 1 containers: [453a3041b38a]
	I0914 10:35:14.893983    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:35:14.904472    4633 logs.go:276] 1 containers: [f73d335e1ea1]
	I0914 10:35:14.904555    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:35:14.915362    4633 logs.go:276] 1 containers: [0f8efd6fef5c]
	I0914 10:35:14.915454    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:35:14.925683    4633 logs.go:276] 0 containers: []
	W0914 10:35:14.925694    4633 logs.go:278] No container was found matching "kindnet"
	I0914 10:35:14.925764    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:35:14.935732    4633 logs.go:276] 1 containers: [7fc5fd563cda]
	I0914 10:35:14.935745    4633 logs.go:123] Gathering logs for kube-scheduler [453a3041b38a] ...
	I0914 10:35:14.935751    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 453a3041b38a"
	I0914 10:35:14.950730    4633 logs.go:123] Gathering logs for coredns [bb0d72a796ab] ...
	I0914 10:35:14.950741    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0d72a796ab"
	I0914 10:35:14.962277    4633 logs.go:123] Gathering logs for coredns [a39016b44acb] ...
	I0914 10:35:14.962286    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39016b44acb"
	I0914 10:35:14.973725    4633 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:35:14.973736    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:35:15.007242    4633 logs.go:123] Gathering logs for kube-apiserver [04291acc9ea5] ...
	I0914 10:35:15.007257    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04291acc9ea5"
	I0914 10:35:15.021790    4633 logs.go:123] Gathering logs for etcd [849350e1760a] ...
	I0914 10:35:15.021806    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849350e1760a"
	I0914 10:35:15.035567    4633 logs.go:123] Gathering logs for kube-proxy [f73d335e1ea1] ...
	I0914 10:35:15.035577    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f73d335e1ea1"
	I0914 10:35:15.047753    4633 logs.go:123] Gathering logs for kube-controller-manager [0f8efd6fef5c] ...
	I0914 10:35:15.047765    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8efd6fef5c"
	I0914 10:35:15.065450    4633 logs.go:123] Gathering logs for storage-provisioner [7fc5fd563cda] ...
	I0914 10:35:15.065459    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc5fd563cda"
	I0914 10:35:15.077287    4633 logs.go:123] Gathering logs for kubelet ...
	I0914 10:35:15.077297    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:35:15.111430    4633 logs.go:123] Gathering logs for dmesg ...
	I0914 10:35:15.111441    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:35:15.116430    4633 logs.go:123] Gathering logs for Docker ...
	I0914 10:35:15.116437    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:35:15.141121    4633 logs.go:123] Gathering logs for container status ...
	I0914 10:35:15.141129    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:35:19.318952    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:35:19.319152    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:35:19.335449    5189 logs.go:276] 2 containers: [11f9ffdf6e43 ea8a24c9014a]
	I0914 10:35:19.335547    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:35:19.352032    5189 logs.go:276] 2 containers: [46f64762f77a bc0eb1fe6478]
	I0914 10:35:19.352119    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:35:19.362643    5189 logs.go:276] 1 containers: [d58d98b98ad4]
	I0914 10:35:19.362734    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:35:19.373178    5189 logs.go:276] 2 containers: [1b357929f298 f2165e8cce8d]
	I0914 10:35:19.373265    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:35:19.383502    5189 logs.go:276] 1 containers: [213b21806615]
	I0914 10:35:19.383584    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:35:19.394072    5189 logs.go:276] 2 containers: [72fbfd868f6c ccbe87febee7]
	I0914 10:35:19.394166    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:35:19.407334    5189 logs.go:276] 0 containers: []
	W0914 10:35:19.407347    5189 logs.go:278] No container was found matching "kindnet"
	I0914 10:35:19.407424    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:35:19.418212    5189 logs.go:276] 2 containers: [db8219ef9871 7588b357ac42]
	I0914 10:35:19.418238    5189 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:35:19.418244    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:35:19.455949    5189 logs.go:123] Gathering logs for kube-apiserver [11f9ffdf6e43] ...
	I0914 10:35:19.455959    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11f9ffdf6e43"
	I0914 10:35:19.469874    5189 logs.go:123] Gathering logs for etcd [46f64762f77a] ...
	I0914 10:35:19.469885    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46f64762f77a"
	I0914 10:35:19.487998    5189 logs.go:123] Gathering logs for kube-proxy [213b21806615] ...
	I0914 10:35:19.488010    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213b21806615"
	I0914 10:35:19.500133    5189 logs.go:123] Gathering logs for kubelet ...
	I0914 10:35:19.500143    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:35:19.538190    5189 logs.go:123] Gathering logs for dmesg ...
	I0914 10:35:19.538198    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:35:19.542579    5189 logs.go:123] Gathering logs for coredns [d58d98b98ad4] ...
	I0914 10:35:19.542589    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d58d98b98ad4"
	I0914 10:35:19.553452    5189 logs.go:123] Gathering logs for kube-controller-manager [72fbfd868f6c] ...
	I0914 10:35:19.553465    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72fbfd868f6c"
	I0914 10:35:19.570946    5189 logs.go:123] Gathering logs for storage-provisioner [7588b357ac42] ...
	I0914 10:35:19.570955    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7588b357ac42"
	I0914 10:35:19.582001    5189 logs.go:123] Gathering logs for Docker ...
	I0914 10:35:19.582012    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:35:19.607029    5189 logs.go:123] Gathering logs for kube-apiserver [ea8a24c9014a] ...
	I0914 10:35:19.607043    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea8a24c9014a"
	I0914 10:35:19.652980    5189 logs.go:123] Gathering logs for kube-scheduler [1b357929f298] ...
	I0914 10:35:19.652991    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b357929f298"
	I0914 10:35:19.670849    5189 logs.go:123] Gathering logs for kube-scheduler [f2165e8cce8d] ...
	I0914 10:35:19.670861    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2165e8cce8d"
	I0914 10:35:19.694407    5189 logs.go:123] Gathering logs for etcd [bc0eb1fe6478] ...
	I0914 10:35:19.694423    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0eb1fe6478"
	I0914 10:35:19.712751    5189 logs.go:123] Gathering logs for kube-controller-manager [ccbe87febee7] ...
	I0914 10:35:19.712761    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccbe87febee7"
	I0914 10:35:19.729866    5189 logs.go:123] Gathering logs for storage-provisioner [db8219ef9871] ...
	I0914 10:35:19.729875    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8219ef9871"
	I0914 10:35:19.741307    5189 logs.go:123] Gathering logs for container status ...
	I0914 10:35:19.741316    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:35:17.654075    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:35:22.255994    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:35:22.656088    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:35:22.656252    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:35:22.669287    4633 logs.go:276] 1 containers: [04291acc9ea5]
	I0914 10:35:22.669389    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:35:22.680885    4633 logs.go:276] 1 containers: [849350e1760a]
	I0914 10:35:22.680974    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:35:22.691550    4633 logs.go:276] 2 containers: [bb0d72a796ab a39016b44acb]
	I0914 10:35:22.691637    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:35:22.702094    4633 logs.go:276] 1 containers: [453a3041b38a]
	I0914 10:35:22.702163    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:35:22.712554    4633 logs.go:276] 1 containers: [f73d335e1ea1]
	I0914 10:35:22.712639    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:35:22.722931    4633 logs.go:276] 1 containers: [0f8efd6fef5c]
	I0914 10:35:22.723016    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:35:22.733295    4633 logs.go:276] 0 containers: []
	W0914 10:35:22.733306    4633 logs.go:278] No container was found matching "kindnet"
	I0914 10:35:22.733374    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:35:22.745318    4633 logs.go:276] 1 containers: [7fc5fd563cda]
	I0914 10:35:22.745335    4633 logs.go:123] Gathering logs for container status ...
	I0914 10:35:22.745341    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:35:22.756277    4633 logs.go:123] Gathering logs for dmesg ...
	I0914 10:35:22.756287    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:35:22.760903    4633 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:35:22.760912    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:35:22.795250    4633 logs.go:123] Gathering logs for kube-apiserver [04291acc9ea5] ...
	I0914 10:35:22.795262    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04291acc9ea5"
	I0914 10:35:22.816811    4633 logs.go:123] Gathering logs for kube-controller-manager [0f8efd6fef5c] ...
	I0914 10:35:22.816822    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8efd6fef5c"
	I0914 10:35:22.834661    4633 logs.go:123] Gathering logs for kube-scheduler [453a3041b38a] ...
	I0914 10:35:22.834673    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 453a3041b38a"
	I0914 10:35:22.849953    4633 logs.go:123] Gathering logs for kube-proxy [f73d335e1ea1] ...
	I0914 10:35:22.849970    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f73d335e1ea1"
	I0914 10:35:22.862179    4633 logs.go:123] Gathering logs for storage-provisioner [7fc5fd563cda] ...
	I0914 10:35:22.862189    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc5fd563cda"
	I0914 10:35:22.873951    4633 logs.go:123] Gathering logs for Docker ...
	I0914 10:35:22.873962    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:35:22.898983    4633 logs.go:123] Gathering logs for kubelet ...
	I0914 10:35:22.898996    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:35:22.933697    4633 logs.go:123] Gathering logs for etcd [849350e1760a] ...
	I0914 10:35:22.933705    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849350e1760a"
	I0914 10:35:22.947288    4633 logs.go:123] Gathering logs for coredns [bb0d72a796ab] ...
	I0914 10:35:22.947299    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0d72a796ab"
	I0914 10:35:22.958595    4633 logs.go:123] Gathering logs for coredns [a39016b44acb] ...
	I0914 10:35:22.958606    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39016b44acb"
	I0914 10:35:25.471986    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:35:27.258272    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:35:27.258550    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:35:27.282496    5189 logs.go:276] 2 containers: [11f9ffdf6e43 ea8a24c9014a]
	I0914 10:35:27.282653    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:35:27.298438    5189 logs.go:276] 2 containers: [46f64762f77a bc0eb1fe6478]
	I0914 10:35:27.298546    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:35:27.311647    5189 logs.go:276] 1 containers: [d58d98b98ad4]
	I0914 10:35:27.311745    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:35:27.322261    5189 logs.go:276] 2 containers: [1b357929f298 f2165e8cce8d]
	I0914 10:35:27.322349    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:35:27.333151    5189 logs.go:276] 1 containers: [213b21806615]
	I0914 10:35:27.333230    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:35:27.344112    5189 logs.go:276] 2 containers: [72fbfd868f6c ccbe87febee7]
	I0914 10:35:27.344199    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:35:27.354267    5189 logs.go:276] 0 containers: []
	W0914 10:35:27.354283    5189 logs.go:278] No container was found matching "kindnet"
	I0914 10:35:27.354355    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:35:27.370290    5189 logs.go:276] 2 containers: [db8219ef9871 7588b357ac42]
	I0914 10:35:27.370315    5189 logs.go:123] Gathering logs for dmesg ...
	I0914 10:35:27.370320    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:35:27.374752    5189 logs.go:123] Gathering logs for etcd [46f64762f77a] ...
	I0914 10:35:27.374758    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46f64762f77a"
	I0914 10:35:27.389157    5189 logs.go:123] Gathering logs for Docker ...
	I0914 10:35:27.389168    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:35:27.414225    5189 logs.go:123] Gathering logs for kube-proxy [213b21806615] ...
	I0914 10:35:27.414234    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213b21806615"
	I0914 10:35:27.425918    5189 logs.go:123] Gathering logs for kube-controller-manager [ccbe87febee7] ...
	I0914 10:35:27.425928    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccbe87febee7"
	I0914 10:35:27.439129    5189 logs.go:123] Gathering logs for storage-provisioner [db8219ef9871] ...
	I0914 10:35:27.439142    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8219ef9871"
	I0914 10:35:27.451038    5189 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:35:27.451050    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:35:27.485548    5189 logs.go:123] Gathering logs for kube-apiserver [11f9ffdf6e43] ...
	I0914 10:35:27.485560    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11f9ffdf6e43"
	I0914 10:35:27.504757    5189 logs.go:123] Gathering logs for kube-apiserver [ea8a24c9014a] ...
	I0914 10:35:27.504773    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea8a24c9014a"
	I0914 10:35:27.545404    5189 logs.go:123] Gathering logs for storage-provisioner [7588b357ac42] ...
	I0914 10:35:27.545415    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7588b357ac42"
	I0914 10:35:27.556775    5189 logs.go:123] Gathering logs for container status ...
	I0914 10:35:27.556785    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:35:27.569352    5189 logs.go:123] Gathering logs for etcd [bc0eb1fe6478] ...
	I0914 10:35:27.569367    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0eb1fe6478"
	I0914 10:35:27.583504    5189 logs.go:123] Gathering logs for kube-scheduler [f2165e8cce8d] ...
	I0914 10:35:27.583514    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2165e8cce8d"
	I0914 10:35:27.598235    5189 logs.go:123] Gathering logs for kube-controller-manager [72fbfd868f6c] ...
	I0914 10:35:27.598246    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72fbfd868f6c"
	I0914 10:35:27.617554    5189 logs.go:123] Gathering logs for kubelet ...
	I0914 10:35:27.617563    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:35:27.656626    5189 logs.go:123] Gathering logs for coredns [d58d98b98ad4] ...
	I0914 10:35:27.656635    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d58d98b98ad4"
	I0914 10:35:27.668140    5189 logs.go:123] Gathering logs for kube-scheduler [1b357929f298] ...
	I0914 10:35:27.668153    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b357929f298"
	I0914 10:35:30.181729    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:35:30.474151    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:35:30.474580    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:35:30.524499    4633 logs.go:276] 1 containers: [04291acc9ea5]
	I0914 10:35:30.524634    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:35:30.540104    4633 logs.go:276] 1 containers: [849350e1760a]
	I0914 10:35:30.540215    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:35:30.553714    4633 logs.go:276] 2 containers: [bb0d72a796ab a39016b44acb]
	I0914 10:35:30.553808    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:35:30.564291    4633 logs.go:276] 1 containers: [453a3041b38a]
	I0914 10:35:30.564369    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:35:30.574567    4633 logs.go:276] 1 containers: [f73d335e1ea1]
	I0914 10:35:30.574649    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:35:30.585235    4633 logs.go:276] 1 containers: [0f8efd6fef5c]
	I0914 10:35:30.585324    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:35:30.595814    4633 logs.go:276] 0 containers: []
	W0914 10:35:30.595823    4633 logs.go:278] No container was found matching "kindnet"
	I0914 10:35:30.595890    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:35:30.606062    4633 logs.go:276] 1 containers: [7fc5fd563cda]
	I0914 10:35:30.606078    4633 logs.go:123] Gathering logs for storage-provisioner [7fc5fd563cda] ...
	I0914 10:35:30.606083    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc5fd563cda"
	I0914 10:35:30.617635    4633 logs.go:123] Gathering logs for kubelet ...
	I0914 10:35:30.617646    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:35:30.652323    4633 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:35:30.652333    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:35:30.687063    4633 logs.go:123] Gathering logs for coredns [a39016b44acb] ...
	I0914 10:35:30.687077    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39016b44acb"
	I0914 10:35:30.699032    4633 logs.go:123] Gathering logs for kube-scheduler [453a3041b38a] ...
	I0914 10:35:30.699043    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 453a3041b38a"
	I0914 10:35:30.714050    4633 logs.go:123] Gathering logs for kube-controller-manager [0f8efd6fef5c] ...
	I0914 10:35:30.714063    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8efd6fef5c"
	I0914 10:35:30.731560    4633 logs.go:123] Gathering logs for Docker ...
	I0914 10:35:30.731571    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:35:30.755802    4633 logs.go:123] Gathering logs for container status ...
	I0914 10:35:30.755822    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:35:30.767561    4633 logs.go:123] Gathering logs for dmesg ...
	I0914 10:35:30.767573    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:35:30.772511    4633 logs.go:123] Gathering logs for kube-apiserver [04291acc9ea5] ...
	I0914 10:35:30.772521    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04291acc9ea5"
	I0914 10:35:30.787220    4633 logs.go:123] Gathering logs for etcd [849350e1760a] ...
	I0914 10:35:30.787231    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849350e1760a"
	I0914 10:35:30.804462    4633 logs.go:123] Gathering logs for coredns [bb0d72a796ab] ...
	I0914 10:35:30.804472    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0d72a796ab"
	I0914 10:35:30.816270    4633 logs.go:123] Gathering logs for kube-proxy [f73d335e1ea1] ...
	I0914 10:35:30.816280    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f73d335e1ea1"
	I0914 10:35:35.183872    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:35:35.184091    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:35:35.200275    5189 logs.go:276] 2 containers: [11f9ffdf6e43 ea8a24c9014a]
	I0914 10:35:35.200378    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:35:35.213844    5189 logs.go:276] 2 containers: [46f64762f77a bc0eb1fe6478]
	I0914 10:35:35.213939    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:35:35.232765    5189 logs.go:276] 1 containers: [d58d98b98ad4]
	I0914 10:35:35.232848    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:35:35.252423    5189 logs.go:276] 2 containers: [1b357929f298 f2165e8cce8d]
	I0914 10:35:35.252512    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:35:35.268125    5189 logs.go:276] 1 containers: [213b21806615]
	I0914 10:35:35.268215    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:35:35.278898    5189 logs.go:276] 2 containers: [72fbfd868f6c ccbe87febee7]
	I0914 10:35:35.278982    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:35:35.289654    5189 logs.go:276] 0 containers: []
	W0914 10:35:35.289664    5189 logs.go:278] No container was found matching "kindnet"
	I0914 10:35:35.289728    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:35:35.300413    5189 logs.go:276] 2 containers: [db8219ef9871 7588b357ac42]
	I0914 10:35:35.300432    5189 logs.go:123] Gathering logs for Docker ...
	I0914 10:35:35.300438    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:35:35.323417    5189 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:35:35.323428    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:35:35.359246    5189 logs.go:123] Gathering logs for kube-apiserver [ea8a24c9014a] ...
	I0914 10:35:35.359256    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea8a24c9014a"
	I0914 10:35:35.397081    5189 logs.go:123] Gathering logs for kube-controller-manager [72fbfd868f6c] ...
	I0914 10:35:35.397092    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72fbfd868f6c"
	I0914 10:35:35.414458    5189 logs.go:123] Gathering logs for kube-controller-manager [ccbe87febee7] ...
	I0914 10:35:35.414468    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccbe87febee7"
	I0914 10:35:35.427626    5189 logs.go:123] Gathering logs for storage-provisioner [db8219ef9871] ...
	I0914 10:35:35.427637    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8219ef9871"
	I0914 10:35:35.439160    5189 logs.go:123] Gathering logs for storage-provisioner [7588b357ac42] ...
	I0914 10:35:35.439172    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7588b357ac42"
	I0914 10:35:35.450419    5189 logs.go:123] Gathering logs for dmesg ...
	I0914 10:35:35.450431    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:35:35.454918    5189 logs.go:123] Gathering logs for etcd [46f64762f77a] ...
	I0914 10:35:35.454927    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46f64762f77a"
	I0914 10:35:35.468668    5189 logs.go:123] Gathering logs for etcd [bc0eb1fe6478] ...
	I0914 10:35:35.468681    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0eb1fe6478"
	I0914 10:35:35.488363    5189 logs.go:123] Gathering logs for kube-scheduler [1b357929f298] ...
	I0914 10:35:35.488376    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b357929f298"
	I0914 10:35:35.500487    5189 logs.go:123] Gathering logs for coredns [d58d98b98ad4] ...
	I0914 10:35:35.500497    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d58d98b98ad4"
	I0914 10:35:35.511704    5189 logs.go:123] Gathering logs for kube-scheduler [f2165e8cce8d] ...
	I0914 10:35:35.511715    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2165e8cce8d"
	I0914 10:35:35.525420    5189 logs.go:123] Gathering logs for kubelet ...
	I0914 10:35:35.525430    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:35:35.562441    5189 logs.go:123] Gathering logs for kube-apiserver [11f9ffdf6e43] ...
	I0914 10:35:35.562449    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11f9ffdf6e43"
	I0914 10:35:35.576084    5189 logs.go:123] Gathering logs for kube-proxy [213b21806615] ...
	I0914 10:35:35.576095    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213b21806615"
	I0914 10:35:35.587897    5189 logs.go:123] Gathering logs for container status ...
	I0914 10:35:35.587908    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:35:33.330291    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:35:38.101478    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:35:38.332906    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:35:38.333443    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:35:38.373239    4633 logs.go:276] 1 containers: [04291acc9ea5]
	I0914 10:35:38.373409    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:35:38.399591    4633 logs.go:276] 1 containers: [849350e1760a]
	I0914 10:35:38.399710    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:35:38.422398    4633 logs.go:276] 2 containers: [bb0d72a796ab a39016b44acb]
	I0914 10:35:38.422489    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:35:38.433664    4633 logs.go:276] 1 containers: [453a3041b38a]
	I0914 10:35:38.433748    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:35:38.444775    4633 logs.go:276] 1 containers: [f73d335e1ea1]
	I0914 10:35:38.444858    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:35:38.455308    4633 logs.go:276] 1 containers: [0f8efd6fef5c]
	I0914 10:35:38.455384    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:35:38.465822    4633 logs.go:276] 0 containers: []
	W0914 10:35:38.465833    4633 logs.go:278] No container was found matching "kindnet"
	I0914 10:35:38.465909    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:35:38.476131    4633 logs.go:276] 1 containers: [7fc5fd563cda]
	I0914 10:35:38.476146    4633 logs.go:123] Gathering logs for kubelet ...
	I0914 10:35:38.476152    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:35:38.511339    4633 logs.go:123] Gathering logs for kube-apiserver [04291acc9ea5] ...
	I0914 10:35:38.511351    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04291acc9ea5"
	I0914 10:35:38.526250    4633 logs.go:123] Gathering logs for etcd [849350e1760a] ...
	I0914 10:35:38.526259    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849350e1760a"
	I0914 10:35:38.541484    4633 logs.go:123] Gathering logs for coredns [a39016b44acb] ...
	I0914 10:35:38.541493    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39016b44acb"
	I0914 10:35:38.552871    4633 logs.go:123] Gathering logs for kube-scheduler [453a3041b38a] ...
	I0914 10:35:38.552881    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 453a3041b38a"
	I0914 10:35:38.568218    4633 logs.go:123] Gathering logs for Docker ...
	I0914 10:35:38.568227    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:35:38.591255    4633 logs.go:123] Gathering logs for container status ...
	I0914 10:35:38.591261    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:35:38.602874    4633 logs.go:123] Gathering logs for dmesg ...
	I0914 10:35:38.602890    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:35:38.607611    4633 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:35:38.607620    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:35:38.641600    4633 logs.go:123] Gathering logs for coredns [bb0d72a796ab] ...
	I0914 10:35:38.641611    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0d72a796ab"
	I0914 10:35:38.654747    4633 logs.go:123] Gathering logs for kube-proxy [f73d335e1ea1] ...
	I0914 10:35:38.654757    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f73d335e1ea1"
	I0914 10:35:38.666451    4633 logs.go:123] Gathering logs for kube-controller-manager [0f8efd6fef5c] ...
	I0914 10:35:38.666461    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8efd6fef5c"
	I0914 10:35:38.683676    4633 logs.go:123] Gathering logs for storage-provisioner [7fc5fd563cda] ...
	I0914 10:35:38.683690    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc5fd563cda"
	I0914 10:35:41.197556    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:35:43.103715    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:35:43.103903    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:35:43.120069    5189 logs.go:276] 2 containers: [11f9ffdf6e43 ea8a24c9014a]
	I0914 10:35:43.120176    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:35:43.132528    5189 logs.go:276] 2 containers: [46f64762f77a bc0eb1fe6478]
	I0914 10:35:43.132618    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:35:43.143036    5189 logs.go:276] 1 containers: [d58d98b98ad4]
	I0914 10:35:43.143130    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:35:43.153861    5189 logs.go:276] 2 containers: [1b357929f298 f2165e8cce8d]
	I0914 10:35:43.153943    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:35:43.164402    5189 logs.go:276] 1 containers: [213b21806615]
	I0914 10:35:43.164488    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:35:43.174632    5189 logs.go:276] 2 containers: [72fbfd868f6c ccbe87febee7]
	I0914 10:35:43.174712    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:35:43.184984    5189 logs.go:276] 0 containers: []
	W0914 10:35:43.184995    5189 logs.go:278] No container was found matching "kindnet"
	I0914 10:35:43.185069    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:35:43.195736    5189 logs.go:276] 2 containers: [db8219ef9871 7588b357ac42]
	I0914 10:35:43.195753    5189 logs.go:123] Gathering logs for dmesg ...
	I0914 10:35:43.195759    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:35:43.199868    5189 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:35:43.199878    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:35:43.233975    5189 logs.go:123] Gathering logs for coredns [d58d98b98ad4] ...
	I0914 10:35:43.233985    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d58d98b98ad4"
	I0914 10:35:43.246157    5189 logs.go:123] Gathering logs for kube-scheduler [f2165e8cce8d] ...
	I0914 10:35:43.246169    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2165e8cce8d"
	I0914 10:35:43.261436    5189 logs.go:123] Gathering logs for kube-proxy [213b21806615] ...
	I0914 10:35:43.261446    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213b21806615"
	I0914 10:35:43.272831    5189 logs.go:123] Gathering logs for kube-controller-manager [ccbe87febee7] ...
	I0914 10:35:43.272840    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccbe87febee7"
	I0914 10:35:43.286748    5189 logs.go:123] Gathering logs for kube-apiserver [ea8a24c9014a] ...
	I0914 10:35:43.286763    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea8a24c9014a"
	I0914 10:35:43.324197    5189 logs.go:123] Gathering logs for etcd [bc0eb1fe6478] ...
	I0914 10:35:43.324208    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0eb1fe6478"
	I0914 10:35:43.338404    5189 logs.go:123] Gathering logs for kube-scheduler [1b357929f298] ...
	I0914 10:35:43.338416    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b357929f298"
	I0914 10:35:43.349719    5189 logs.go:123] Gathering logs for Docker ...
	I0914 10:35:43.349732    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:35:43.373163    5189 logs.go:123] Gathering logs for container status ...
	I0914 10:35:43.373171    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:35:43.384826    5189 logs.go:123] Gathering logs for kubelet ...
	I0914 10:35:43.384841    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:35:43.421482    5189 logs.go:123] Gathering logs for kube-apiserver [11f9ffdf6e43] ...
	I0914 10:35:43.421489    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11f9ffdf6e43"
	I0914 10:35:43.435559    5189 logs.go:123] Gathering logs for storage-provisioner [db8219ef9871] ...
	I0914 10:35:43.435570    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8219ef9871"
	I0914 10:35:43.446675    5189 logs.go:123] Gathering logs for etcd [46f64762f77a] ...
	I0914 10:35:43.446686    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46f64762f77a"
	I0914 10:35:43.461094    5189 logs.go:123] Gathering logs for kube-controller-manager [72fbfd868f6c] ...
	I0914 10:35:43.461104    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72fbfd868f6c"
	I0914 10:35:43.479193    5189 logs.go:123] Gathering logs for storage-provisioner [7588b357ac42] ...
	I0914 10:35:43.479202    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7588b357ac42"
	I0914 10:35:46.198910    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:35:46.199087    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:35:46.214346    4633 logs.go:276] 1 containers: [04291acc9ea5]
	I0914 10:35:46.214453    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:35:46.226241    4633 logs.go:276] 1 containers: [849350e1760a]
	I0914 10:35:46.226327    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:35:46.236714    4633 logs.go:276] 2 containers: [bb0d72a796ab a39016b44acb]
	I0914 10:35:46.236801    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:35:46.246688    4633 logs.go:276] 1 containers: [453a3041b38a]
	I0914 10:35:46.246760    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:35:46.257507    4633 logs.go:276] 1 containers: [f73d335e1ea1]
	I0914 10:35:46.257598    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:35:46.268964    4633 logs.go:276] 1 containers: [0f8efd6fef5c]
	I0914 10:35:46.269046    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:35:46.279457    4633 logs.go:276] 0 containers: []
	W0914 10:35:46.279472    4633 logs.go:278] No container was found matching "kindnet"
	I0914 10:35:46.279548    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:35:46.289913    4633 logs.go:276] 1 containers: [7fc5fd563cda]
	I0914 10:35:46.289934    4633 logs.go:123] Gathering logs for kubelet ...
	I0914 10:35:46.289939    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:35:46.323082    4633 logs.go:123] Gathering logs for coredns [bb0d72a796ab] ...
	I0914 10:35:46.323090    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0d72a796ab"
	I0914 10:35:46.334509    4633 logs.go:123] Gathering logs for kube-proxy [f73d335e1ea1] ...
	I0914 10:35:46.334518    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f73d335e1ea1"
	I0914 10:35:46.347916    4633 logs.go:123] Gathering logs for kube-controller-manager [0f8efd6fef5c] ...
	I0914 10:35:46.347925    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8efd6fef5c"
	I0914 10:35:46.367903    4633 logs.go:123] Gathering logs for dmesg ...
	I0914 10:35:46.367910    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:35:46.372556    4633 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:35:46.372563    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:35:46.406537    4633 logs.go:123] Gathering logs for kube-apiserver [04291acc9ea5] ...
	I0914 10:35:46.406548    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04291acc9ea5"
	I0914 10:35:46.421259    4633 logs.go:123] Gathering logs for etcd [849350e1760a] ...
	I0914 10:35:46.421271    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849350e1760a"
	I0914 10:35:46.438679    4633 logs.go:123] Gathering logs for coredns [a39016b44acb] ...
	I0914 10:35:46.438689    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39016b44acb"
	I0914 10:35:46.450534    4633 logs.go:123] Gathering logs for kube-scheduler [453a3041b38a] ...
	I0914 10:35:46.450544    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 453a3041b38a"
	I0914 10:35:46.473768    4633 logs.go:123] Gathering logs for storage-provisioner [7fc5fd563cda] ...
	I0914 10:35:46.473778    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc5fd563cda"
	I0914 10:35:46.485147    4633 logs.go:123] Gathering logs for Docker ...
	I0914 10:35:46.485157    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:35:46.509135    4633 logs.go:123] Gathering logs for container status ...
	I0914 10:35:46.509145    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:35:45.992373    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:35:49.022944    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:35:50.994509    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:35:50.994667    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:35:51.006033    5189 logs.go:276] 2 containers: [11f9ffdf6e43 ea8a24c9014a]
	I0914 10:35:51.006116    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:35:51.018662    5189 logs.go:276] 2 containers: [46f64762f77a bc0eb1fe6478]
	I0914 10:35:51.018749    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:35:51.032204    5189 logs.go:276] 1 containers: [d58d98b98ad4]
	I0914 10:35:51.032292    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:35:51.042627    5189 logs.go:276] 2 containers: [1b357929f298 f2165e8cce8d]
	I0914 10:35:51.042716    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:35:51.053574    5189 logs.go:276] 1 containers: [213b21806615]
	I0914 10:35:51.053658    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:35:51.064699    5189 logs.go:276] 2 containers: [72fbfd868f6c ccbe87febee7]
	I0914 10:35:51.064790    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:35:51.075106    5189 logs.go:276] 0 containers: []
	W0914 10:35:51.075116    5189 logs.go:278] No container was found matching "kindnet"
	I0914 10:35:51.075189    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:35:51.087583    5189 logs.go:276] 2 containers: [db8219ef9871 7588b357ac42]
	I0914 10:35:51.087601    5189 logs.go:123] Gathering logs for Docker ...
	I0914 10:35:51.087606    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:35:51.110034    5189 logs.go:123] Gathering logs for container status ...
	I0914 10:35:51.110042    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:35:51.121856    5189 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:35:51.121867    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:35:51.155894    5189 logs.go:123] Gathering logs for kube-apiserver [11f9ffdf6e43] ...
	I0914 10:35:51.155905    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11f9ffdf6e43"
	I0914 10:35:51.169701    5189 logs.go:123] Gathering logs for kube-apiserver [ea8a24c9014a] ...
	I0914 10:35:51.169713    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea8a24c9014a"
	I0914 10:35:51.206992    5189 logs.go:123] Gathering logs for kube-controller-manager [72fbfd868f6c] ...
	I0914 10:35:51.207003    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72fbfd868f6c"
	I0914 10:35:51.230361    5189 logs.go:123] Gathering logs for kubelet ...
	I0914 10:35:51.230371    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:35:51.269994    5189 logs.go:123] Gathering logs for coredns [d58d98b98ad4] ...
	I0914 10:35:51.270007    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d58d98b98ad4"
	I0914 10:35:51.281229    5189 logs.go:123] Gathering logs for kube-scheduler [f2165e8cce8d] ...
	I0914 10:35:51.281240    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2165e8cce8d"
	I0914 10:35:51.295955    5189 logs.go:123] Gathering logs for kube-proxy [213b21806615] ...
	I0914 10:35:51.295964    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213b21806615"
	I0914 10:35:51.307705    5189 logs.go:123] Gathering logs for kube-controller-manager [ccbe87febee7] ...
	I0914 10:35:51.307716    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccbe87febee7"
	I0914 10:35:51.321391    5189 logs.go:123] Gathering logs for storage-provisioner [db8219ef9871] ...
	I0914 10:35:51.321425    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8219ef9871"
	I0914 10:35:51.333207    5189 logs.go:123] Gathering logs for storage-provisioner [7588b357ac42] ...
	I0914 10:35:51.333221    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7588b357ac42"
	I0914 10:35:51.344293    5189 logs.go:123] Gathering logs for dmesg ...
	I0914 10:35:51.344303    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:35:51.348738    5189 logs.go:123] Gathering logs for etcd [46f64762f77a] ...
	I0914 10:35:51.348746    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46f64762f77a"
	I0914 10:35:51.362500    5189 logs.go:123] Gathering logs for etcd [bc0eb1fe6478] ...
	I0914 10:35:51.362512    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0eb1fe6478"
	I0914 10:35:51.376883    5189 logs.go:123] Gathering logs for kube-scheduler [1b357929f298] ...
	I0914 10:35:51.376894    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b357929f298"
	I0914 10:35:53.890840    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:35:54.025105    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:35:54.025289    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:35:54.043248    4633 logs.go:276] 1 containers: [04291acc9ea5]
	I0914 10:35:54.043335    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:35:54.054480    4633 logs.go:276] 1 containers: [849350e1760a]
	I0914 10:35:54.054564    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:35:54.065927    4633 logs.go:276] 2 containers: [bb0d72a796ab a39016b44acb]
	I0914 10:35:54.066014    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:35:54.077178    4633 logs.go:276] 1 containers: [453a3041b38a]
	I0914 10:35:54.077262    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:35:54.087704    4633 logs.go:276] 1 containers: [f73d335e1ea1]
	I0914 10:35:54.087785    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:35:54.098210    4633 logs.go:276] 1 containers: [0f8efd6fef5c]
	I0914 10:35:54.098294    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:35:54.108847    4633 logs.go:276] 0 containers: []
	W0914 10:35:54.108857    4633 logs.go:278] No container was found matching "kindnet"
	I0914 10:35:54.108927    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:35:54.119129    4633 logs.go:276] 1 containers: [7fc5fd563cda]
	I0914 10:35:54.119147    4633 logs.go:123] Gathering logs for coredns [a39016b44acb] ...
	I0914 10:35:54.119152    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39016b44acb"
	I0914 10:35:54.135450    4633 logs.go:123] Gathering logs for kube-scheduler [453a3041b38a] ...
	I0914 10:35:54.135464    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 453a3041b38a"
	I0914 10:35:54.150429    4633 logs.go:123] Gathering logs for kube-proxy [f73d335e1ea1] ...
	I0914 10:35:54.150439    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f73d335e1ea1"
	I0914 10:35:54.166382    4633 logs.go:123] Gathering logs for storage-provisioner [7fc5fd563cda] ...
	I0914 10:35:54.166394    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc5fd563cda"
	I0914 10:35:54.177918    4633 logs.go:123] Gathering logs for kubelet ...
	I0914 10:35:54.177927    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:35:54.212976    4633 logs.go:123] Gathering logs for kube-apiserver [04291acc9ea5] ...
	I0914 10:35:54.212985    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04291acc9ea5"
	I0914 10:35:54.227039    4633 logs.go:123] Gathering logs for etcd [849350e1760a] ...
	I0914 10:35:54.227048    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849350e1760a"
	I0914 10:35:54.240942    4633 logs.go:123] Gathering logs for coredns [bb0d72a796ab] ...
	I0914 10:35:54.240955    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0d72a796ab"
	I0914 10:35:54.251996    4633 logs.go:123] Gathering logs for Docker ...
	I0914 10:35:54.252006    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:35:54.276772    4633 logs.go:123] Gathering logs for dmesg ...
	I0914 10:35:54.276781    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:35:54.281266    4633 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:35:54.281276    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:35:54.316810    4633 logs.go:123] Gathering logs for kube-controller-manager [0f8efd6fef5c] ...
	I0914 10:35:54.316827    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8efd6fef5c"
	I0914 10:35:54.335404    4633 logs.go:123] Gathering logs for container status ...
	I0914 10:35:54.335414    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:35:56.850168    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
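
After each failed probe, the process re-discovers the component containers before gathering their logs, one `docker ps` call per component. A rough equivalent of that discovery pass, using exactly the flags visible in the log (the `k8s_` name prefix and component list are copied from the filter expressions above; everything else is sketch scaffolding):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers mirrors the invocation in the log:
//   docker ps -a --filter=name=k8s_<name> --format={{.ID}}
func listContainers(name string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+name,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
	}
	for _, c := range components {
		ids, err := listContainers(c)
		if err != nil {
			fmt.Printf("%s: %v\n", c, err)
			continue
		}
		// zero IDs produces the same situation as the
		// `No container was found matching "kindnet"` warning above
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	}
}
```
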
	I0914 10:35:58.891001    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:35:58.891131    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:35:58.902564    5189 logs.go:276] 2 containers: [11f9ffdf6e43 ea8a24c9014a]
	I0914 10:35:58.902653    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:35:58.913372    5189 logs.go:276] 2 containers: [46f64762f77a bc0eb1fe6478]
	I0914 10:35:58.913461    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:35:58.932319    5189 logs.go:276] 1 containers: [d58d98b98ad4]
	I0914 10:35:58.932405    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:35:58.942626    5189 logs.go:276] 2 containers: [1b357929f298 f2165e8cce8d]
	I0914 10:35:58.942708    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:35:58.952913    5189 logs.go:276] 1 containers: [213b21806615]
	I0914 10:35:58.952996    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:35:58.964039    5189 logs.go:276] 2 containers: [72fbfd868f6c ccbe87febee7]
	I0914 10:35:58.964125    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:35:58.974062    5189 logs.go:276] 0 containers: []
	W0914 10:35:58.974078    5189 logs.go:278] No container was found matching "kindnet"
	I0914 10:35:58.974154    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:35:58.984544    5189 logs.go:276] 2 containers: [db8219ef9871 7588b357ac42]
	I0914 10:35:58.984562    5189 logs.go:123] Gathering logs for storage-provisioner [db8219ef9871] ...
	I0914 10:35:58.984567    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8219ef9871"
	I0914 10:35:58.995895    5189 logs.go:123] Gathering logs for Docker ...
	I0914 10:35:58.995903    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:35:59.020335    5189 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:35:59.020346    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:35:59.058767    5189 logs.go:123] Gathering logs for kube-apiserver [11f9ffdf6e43] ...
	I0914 10:35:59.058778    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11f9ffdf6e43"
	I0914 10:35:59.072519    5189 logs.go:123] Gathering logs for kube-scheduler [1b357929f298] ...
	I0914 10:35:59.072529    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b357929f298"
	I0914 10:35:59.084227    5189 logs.go:123] Gathering logs for kube-scheduler [f2165e8cce8d] ...
	I0914 10:35:59.084237    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2165e8cce8d"
	I0914 10:35:59.098653    5189 logs.go:123] Gathering logs for kube-proxy [213b21806615] ...
	I0914 10:35:59.098668    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213b21806615"
	I0914 10:35:59.110247    5189 logs.go:123] Gathering logs for kubelet ...
	I0914 10:35:59.110259    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:35:59.147092    5189 logs.go:123] Gathering logs for etcd [bc0eb1fe6478] ...
	I0914 10:35:59.147105    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0eb1fe6478"
	I0914 10:35:59.164236    5189 logs.go:123] Gathering logs for storage-provisioner [7588b357ac42] ...
	I0914 10:35:59.164246    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7588b357ac42"
	I0914 10:35:59.179808    5189 logs.go:123] Gathering logs for dmesg ...
	I0914 10:35:59.179819    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:35:59.184068    5189 logs.go:123] Gathering logs for kube-apiserver [ea8a24c9014a] ...
	I0914 10:35:59.184077    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea8a24c9014a"
	I0914 10:35:59.221818    5189 logs.go:123] Gathering logs for etcd [46f64762f77a] ...
	I0914 10:35:59.221829    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46f64762f77a"
	I0914 10:35:59.235908    5189 logs.go:123] Gathering logs for coredns [d58d98b98ad4] ...
	I0914 10:35:59.235918    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d58d98b98ad4"
	I0914 10:35:59.246903    5189 logs.go:123] Gathering logs for kube-controller-manager [72fbfd868f6c] ...
	I0914 10:35:59.246914    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72fbfd868f6c"
	I0914 10:35:59.264191    5189 logs.go:123] Gathering logs for kube-controller-manager [ccbe87febee7] ...
	I0914 10:35:59.264201    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccbe87febee7"
	I0914 10:35:59.277584    5189 logs.go:123] Gathering logs for container status ...
	I0914 10:35:59.277594    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:36:01.852293    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:36:01.852579    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:36:01.882325    4633 logs.go:276] 1 containers: [04291acc9ea5]
	I0914 10:36:01.882461    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:36:01.905569    4633 logs.go:276] 1 containers: [849350e1760a]
	I0914 10:36:01.905672    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:36:01.918755    4633 logs.go:276] 2 containers: [bb0d72a796ab a39016b44acb]
	I0914 10:36:01.918842    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:36:01.929680    4633 logs.go:276] 1 containers: [453a3041b38a]
	I0914 10:36:01.929760    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:36:01.940208    4633 logs.go:276] 1 containers: [f73d335e1ea1]
	I0914 10:36:01.940288    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:36:01.951156    4633 logs.go:276] 1 containers: [0f8efd6fef5c]
	I0914 10:36:01.951230    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:36:01.963210    4633 logs.go:276] 0 containers: []
	W0914 10:36:01.963222    4633 logs.go:278] No container was found matching "kindnet"
	I0914 10:36:01.963303    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:36:01.974034    4633 logs.go:276] 1 containers: [7fc5fd563cda]
	I0914 10:36:01.974049    4633 logs.go:123] Gathering logs for kubelet ...
	I0914 10:36:01.974054    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:36:02.009569    4633 logs.go:123] Gathering logs for kube-apiserver [04291acc9ea5] ...
	I0914 10:36:02.009576    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04291acc9ea5"
	I0914 10:36:02.027770    4633 logs.go:123] Gathering logs for coredns [bb0d72a796ab] ...
	I0914 10:36:02.027785    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0d72a796ab"
	I0914 10:36:02.039107    4633 logs.go:123] Gathering logs for kube-proxy [f73d335e1ea1] ...
	I0914 10:36:02.039117    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f73d335e1ea1"
	I0914 10:36:02.051719    4633 logs.go:123] Gathering logs for storage-provisioner [7fc5fd563cda] ...
	I0914 10:36:02.051729    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc5fd563cda"
	I0914 10:36:02.063527    4633 logs.go:123] Gathering logs for Docker ...
	I0914 10:36:02.063538    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:36:02.088315    4633 logs.go:123] Gathering logs for dmesg ...
	I0914 10:36:02.088326    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:36:02.092475    4633 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:36:02.092481    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:36:02.132617    4633 logs.go:123] Gathering logs for etcd [849350e1760a] ...
	I0914 10:36:02.132629    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849350e1760a"
	I0914 10:36:02.146891    4633 logs.go:123] Gathering logs for coredns [a39016b44acb] ...
	I0914 10:36:02.146904    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39016b44acb"
	I0914 10:36:02.158160    4633 logs.go:123] Gathering logs for kube-scheduler [453a3041b38a] ...
	I0914 10:36:02.158171    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 453a3041b38a"
	I0914 10:36:02.173301    4633 logs.go:123] Gathering logs for kube-controller-manager [0f8efd6fef5c] ...
	I0914 10:36:02.173316    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8efd6fef5c"
	I0914 10:36:02.190904    4633 logs.go:123] Gathering logs for container status ...
	I0914 10:36:02.190915    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
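
With the container IDs in hand, each "Gathering logs for …" step runs one remote command per source: `docker logs --tail 400 <id>` for containers, `journalctl -u <unit> -n 400` for the kubelet and Docker units, and a filtered `dmesg` for the kernel ring buffer. A condensed sketch of that collection loop; the ssh_runner plumbing is replaced by plain local exec here, so this is illustrative rather than minikube's code, and the container ID is copied from the log purely as an example:

```go
package main

import (
	"fmt"
	"os/exec"
)

// gather runs one diagnostic command through `bash -c`, matching the
// `/bin/bash -c "..."` wrapper on every ssh_runner.go line in the log.
func gather(label, cmd string) {
	fmt.Printf("Gathering logs for %s ...\n", label)
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	if err != nil {
		fmt.Printf("%s failed: %v\n", label, err)
	}
	fmt.Print(string(out))
}

func main() {
	// IDs come from the docker ps discovery pass; this one is from the log.
	gather("kube-apiserver [04291acc9ea5]", "docker logs --tail 400 04291acc9ea5")
	gather("kubelet", "sudo journalctl -u kubelet -n 400")
	gather("Docker", "sudo journalctl -u docker -u cri-docker -n 400")
	gather("dmesg",
		"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
}
```

The final "container status" step in each cycle uses a fallback chain, `sudo `which crictl || echo crictl` ps -a || sudo docker ps -a`: prefer crictl when it is installed, and if the bare `crictl` fails, fall through to `docker ps -a`.
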
	I0914 10:36:01.792506    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:36:04.704251    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:36:06.794727    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:36:06.794882    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:36:06.806620    5189 logs.go:276] 2 containers: [11f9ffdf6e43 ea8a24c9014a]
	I0914 10:36:06.806709    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:36:06.818148    5189 logs.go:276] 2 containers: [46f64762f77a bc0eb1fe6478]
	I0914 10:36:06.818248    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:36:06.830270    5189 logs.go:276] 1 containers: [d58d98b98ad4]
	I0914 10:36:06.830361    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:36:06.840422    5189 logs.go:276] 2 containers: [1b357929f298 f2165e8cce8d]
	I0914 10:36:06.840514    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:36:06.850889    5189 logs.go:276] 1 containers: [213b21806615]
	I0914 10:36:06.850969    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:36:06.861635    5189 logs.go:276] 2 containers: [72fbfd868f6c ccbe87febee7]
	I0914 10:36:06.861714    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:36:06.872011    5189 logs.go:276] 0 containers: []
	W0914 10:36:06.872023    5189 logs.go:278] No container was found matching "kindnet"
	I0914 10:36:06.872099    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:36:06.886437    5189 logs.go:276] 2 containers: [db8219ef9871 7588b357ac42]
	I0914 10:36:06.886454    5189 logs.go:123] Gathering logs for coredns [d58d98b98ad4] ...
	I0914 10:36:06.886459    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d58d98b98ad4"
	I0914 10:36:06.897843    5189 logs.go:123] Gathering logs for kube-proxy [213b21806615] ...
	I0914 10:36:06.897856    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213b21806615"
	I0914 10:36:06.910136    5189 logs.go:123] Gathering logs for kubelet ...
	I0914 10:36:06.910149    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:36:06.948970    5189 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:36:06.948978    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:36:06.991054    5189 logs.go:123] Gathering logs for etcd [46f64762f77a] ...
	I0914 10:36:06.991068    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46f64762f77a"
	I0914 10:36:07.005090    5189 logs.go:123] Gathering logs for kube-apiserver [11f9ffdf6e43] ...
	I0914 10:36:07.005103    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11f9ffdf6e43"
	I0914 10:36:07.018786    5189 logs.go:123] Gathering logs for kube-scheduler [1b357929f298] ...
	I0914 10:36:07.018796    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b357929f298"
	I0914 10:36:07.031301    5189 logs.go:123] Gathering logs for storage-provisioner [7588b357ac42] ...
	I0914 10:36:07.031312    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7588b357ac42"
	I0914 10:36:07.050991    5189 logs.go:123] Gathering logs for container status ...
	I0914 10:36:07.051003    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:36:07.063147    5189 logs.go:123] Gathering logs for kube-apiserver [ea8a24c9014a] ...
	I0914 10:36:07.063163    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea8a24c9014a"
	I0914 10:36:07.106196    5189 logs.go:123] Gathering logs for etcd [bc0eb1fe6478] ...
	I0914 10:36:07.106210    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0eb1fe6478"
	I0914 10:36:07.120842    5189 logs.go:123] Gathering logs for kube-controller-manager [72fbfd868f6c] ...
	I0914 10:36:07.120853    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72fbfd868f6c"
	I0914 10:36:07.141588    5189 logs.go:123] Gathering logs for storage-provisioner [db8219ef9871] ...
	I0914 10:36:07.141600    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8219ef9871"
	I0914 10:36:07.155901    5189 logs.go:123] Gathering logs for Docker ...
	I0914 10:36:07.155916    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:36:07.179723    5189 logs.go:123] Gathering logs for dmesg ...
	I0914 10:36:07.179735    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:36:07.183884    5189 logs.go:123] Gathering logs for kube-scheduler [f2165e8cce8d] ...
	I0914 10:36:07.183890    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2165e8cce8d"
	I0914 10:36:07.198438    5189 logs.go:123] Gathering logs for kube-controller-manager [ccbe87febee7] ...
	I0914 10:36:07.198452    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccbe87febee7"
	I0914 10:36:09.713979    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:36:09.706509    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:36:09.706991    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:36:09.749011    4633 logs.go:276] 1 containers: [04291acc9ea5]
	I0914 10:36:09.749170    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:36:09.768844    4633 logs.go:276] 1 containers: [849350e1760a]
	I0914 10:36:09.768950    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:36:09.784209    4633 logs.go:276] 2 containers: [bb0d72a796ab a39016b44acb]
	I0914 10:36:09.784303    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:36:09.796061    4633 logs.go:276] 1 containers: [453a3041b38a]
	I0914 10:36:09.796149    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:36:09.806370    4633 logs.go:276] 1 containers: [f73d335e1ea1]
	I0914 10:36:09.806456    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:36:09.816823    4633 logs.go:276] 1 containers: [0f8efd6fef5c]
	I0914 10:36:09.816904    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:36:09.827876    4633 logs.go:276] 0 containers: []
	W0914 10:36:09.827887    4633 logs.go:278] No container was found matching "kindnet"
	I0914 10:36:09.827958    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:36:09.838525    4633 logs.go:276] 1 containers: [7fc5fd563cda]
	I0914 10:36:09.838541    4633 logs.go:123] Gathering logs for dmesg ...
	I0914 10:36:09.838548    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:36:09.843428    4633 logs.go:123] Gathering logs for kube-apiserver [04291acc9ea5] ...
	I0914 10:36:09.843434    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04291acc9ea5"
	I0914 10:36:09.857532    4633 logs.go:123] Gathering logs for coredns [bb0d72a796ab] ...
	I0914 10:36:09.857542    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0d72a796ab"
	I0914 10:36:09.869062    4633 logs.go:123] Gathering logs for kube-scheduler [453a3041b38a] ...
	I0914 10:36:09.869077    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 453a3041b38a"
	I0914 10:36:09.883951    4633 logs.go:123] Gathering logs for kube-controller-manager [0f8efd6fef5c] ...
	I0914 10:36:09.883961    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8efd6fef5c"
	I0914 10:36:09.903650    4633 logs.go:123] Gathering logs for storage-provisioner [7fc5fd563cda] ...
	I0914 10:36:09.903662    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc5fd563cda"
	I0914 10:36:09.915190    4633 logs.go:123] Gathering logs for container status ...
	I0914 10:36:09.915204    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:36:09.926821    4633 logs.go:123] Gathering logs for kubelet ...
	I0914 10:36:09.926834    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:36:09.960645    4633 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:36:09.960658    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:36:09.995725    4633 logs.go:123] Gathering logs for etcd [849350e1760a] ...
	I0914 10:36:09.995739    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849350e1760a"
	I0914 10:36:10.009508    4633 logs.go:123] Gathering logs for coredns [a39016b44acb] ...
	I0914 10:36:10.009522    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39016b44acb"
	I0914 10:36:10.020576    4633 logs.go:123] Gathering logs for kube-proxy [f73d335e1ea1] ...
	I0914 10:36:10.020589    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f73d335e1ea1"
	I0914 10:36:10.032028    4633 logs.go:123] Gathering logs for Docker ...
	I0914 10:36:10.032043    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:36:14.716216    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:36:14.716392    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:36:14.728374    5189 logs.go:276] 2 containers: [11f9ffdf6e43 ea8a24c9014a]
	I0914 10:36:14.728473    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:36:14.742979    5189 logs.go:276] 2 containers: [46f64762f77a bc0eb1fe6478]
	I0914 10:36:14.743071    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:36:14.753670    5189 logs.go:276] 1 containers: [d58d98b98ad4]
	I0914 10:36:14.753758    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:36:14.765104    5189 logs.go:276] 2 containers: [1b357929f298 f2165e8cce8d]
	I0914 10:36:14.765192    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:36:14.777768    5189 logs.go:276] 1 containers: [213b21806615]
	I0914 10:36:14.777848    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:36:14.790162    5189 logs.go:276] 2 containers: [72fbfd868f6c ccbe87febee7]
	I0914 10:36:14.790247    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:36:14.801287    5189 logs.go:276] 0 containers: []
	W0914 10:36:14.801298    5189 logs.go:278] No container was found matching "kindnet"
	I0914 10:36:14.801370    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:36:14.812233    5189 logs.go:276] 2 containers: [db8219ef9871 7588b357ac42]
	I0914 10:36:14.812251    5189 logs.go:123] Gathering logs for kube-apiserver [ea8a24c9014a] ...
	I0914 10:36:14.812256    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea8a24c9014a"
	I0914 10:36:14.852709    5189 logs.go:123] Gathering logs for kube-proxy [213b21806615] ...
	I0914 10:36:14.852723    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213b21806615"
	I0914 10:36:14.867785    5189 logs.go:123] Gathering logs for kube-controller-manager [72fbfd868f6c] ...
	I0914 10:36:14.867797    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72fbfd868f6c"
	I0914 10:36:14.885694    5189 logs.go:123] Gathering logs for storage-provisioner [db8219ef9871] ...
	I0914 10:36:14.885709    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8219ef9871"
	I0914 10:36:14.897365    5189 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:36:14.897381    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:36:14.931784    5189 logs.go:123] Gathering logs for etcd [46f64762f77a] ...
	I0914 10:36:14.931796    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46f64762f77a"
	I0914 10:36:14.945878    5189 logs.go:123] Gathering logs for kube-scheduler [1b357929f298] ...
	I0914 10:36:14.945891    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b357929f298"
	I0914 10:36:14.957866    5189 logs.go:123] Gathering logs for kube-controller-manager [ccbe87febee7] ...
	I0914 10:36:14.957877    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccbe87febee7"
	I0914 10:36:14.971825    5189 logs.go:123] Gathering logs for kubelet ...
	I0914 10:36:14.971838    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:36:15.008564    5189 logs.go:123] Gathering logs for dmesg ...
	I0914 10:36:15.008572    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:36:15.012392    5189 logs.go:123] Gathering logs for etcd [bc0eb1fe6478] ...
	I0914 10:36:15.012399    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0eb1fe6478"
	I0914 10:36:15.026675    5189 logs.go:123] Gathering logs for coredns [d58d98b98ad4] ...
	I0914 10:36:15.026686    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d58d98b98ad4"
	I0914 10:36:15.038605    5189 logs.go:123] Gathering logs for storage-provisioner [7588b357ac42] ...
	I0914 10:36:15.038615    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7588b357ac42"
	I0914 10:36:15.049408    5189 logs.go:123] Gathering logs for kube-apiserver [11f9ffdf6e43] ...
	I0914 10:36:15.049418    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11f9ffdf6e43"
	I0914 10:36:15.063427    5189 logs.go:123] Gathering logs for kube-scheduler [f2165e8cce8d] ...
	I0914 10:36:15.063437    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2165e8cce8d"
	I0914 10:36:15.078950    5189 logs.go:123] Gathering logs for Docker ...
	I0914 10:36:15.078959    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:36:15.102805    5189 logs.go:123] Gathering logs for container status ...
	I0914 10:36:15.102813    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
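
Putting the pieces together, each process repeats the same cycle for the whole section: probe /healthz, and on timeout rerun discovery and collection before the next probe. A schematic of that outer loop, with stubbed collectors and made-up interval and budget values (the timestamps above show probes roughly every 4-8 seconds, but the log states no explicit constants):

```go
package main

import (
	"fmt"
	"time"
)

// checkHealthz and collectDiagnostics stand in for the sketches shown
// earlier; stubbed so the loop structure is self-contained.
func checkHealthz(url string) error { return fmt.Errorf("context deadline exceeded") }
func collectDiagnostics()           { fmt.Println("gathering logs ...") }

func main() {
	url := "https://10.0.2.15:8443/healthz"
	deadline := time.Now().Add(4 * time.Minute) // assumed overall budget
	for time.Now().Before(deadline) {
		if err := checkHealthz(url); err == nil {
			fmt.Println("apiserver healthy")
			return
		}
		fmt.Printf("stopped: %s: retrying\n", url)
		collectDiagnostics()
		time.Sleep(5 * time.Second) // spacing roughly matching the timestamps
	}
	fmt.Println("gave up waiting for apiserver")
}
```
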
	I0914 10:36:12.557226    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:36:17.616176    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:36:17.559445    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:36:17.559632    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:36:17.575282    4633 logs.go:276] 1 containers: [04291acc9ea5]
	I0914 10:36:17.575384    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:36:17.587070    4633 logs.go:276] 1 containers: [849350e1760a]
	I0914 10:36:17.587145    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:36:17.598078    4633 logs.go:276] 4 containers: [426f46946fcd 40433f7e0d05 bb0d72a796ab a39016b44acb]
	I0914 10:36:17.598148    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:36:17.609369    4633 logs.go:276] 1 containers: [453a3041b38a]
	I0914 10:36:17.609458    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:36:17.620757    4633 logs.go:276] 1 containers: [f73d335e1ea1]
	I0914 10:36:17.620838    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:36:17.630531    4633 logs.go:276] 1 containers: [0f8efd6fef5c]
	I0914 10:36:17.630613    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:36:17.641492    4633 logs.go:276] 0 containers: []
	W0914 10:36:17.641504    4633 logs.go:278] No container was found matching "kindnet"
	I0914 10:36:17.641578    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:36:17.651928    4633 logs.go:276] 1 containers: [7fc5fd563cda]
	I0914 10:36:17.651944    4633 logs.go:123] Gathering logs for kube-scheduler [453a3041b38a] ...
	I0914 10:36:17.651950    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 453a3041b38a"
	I0914 10:36:17.666902    4633 logs.go:123] Gathering logs for kube-controller-manager [0f8efd6fef5c] ...
	I0914 10:36:17.666911    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8efd6fef5c"
	I0914 10:36:17.684047    4633 logs.go:123] Gathering logs for storage-provisioner [7fc5fd563cda] ...
	I0914 10:36:17.684057    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc5fd563cda"
	I0914 10:36:17.695703    4633 logs.go:123] Gathering logs for coredns [a39016b44acb] ...
	I0914 10:36:17.695713    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39016b44acb"
	I0914 10:36:17.717094    4633 logs.go:123] Gathering logs for kube-proxy [f73d335e1ea1] ...
	I0914 10:36:17.717103    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f73d335e1ea1"
	I0914 10:36:17.736321    4633 logs.go:123] Gathering logs for Docker ...
	I0914 10:36:17.736330    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:36:17.760350    4633 logs.go:123] Gathering logs for kubelet ...
	I0914 10:36:17.760362    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:36:17.795248    4633 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:36:17.795256    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:36:17.832278    4633 logs.go:123] Gathering logs for coredns [426f46946fcd] ...
	I0914 10:36:17.832289    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 426f46946fcd"
	I0914 10:36:17.843355    4633 logs.go:123] Gathering logs for coredns [40433f7e0d05] ...
	I0914 10:36:17.843369    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40433f7e0d05"
	I0914 10:36:17.855446    4633 logs.go:123] Gathering logs for dmesg ...
	I0914 10:36:17.855460    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:36:17.860124    4633 logs.go:123] Gathering logs for kube-apiserver [04291acc9ea5] ...
	I0914 10:36:17.860130    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04291acc9ea5"
	I0914 10:36:17.875516    4633 logs.go:123] Gathering logs for etcd [849350e1760a] ...
	I0914 10:36:17.875525    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849350e1760a"
	I0914 10:36:17.889315    4633 logs.go:123] Gathering logs for coredns [bb0d72a796ab] ...
	I0914 10:36:17.889325    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0d72a796ab"
	I0914 10:36:17.901047    4633 logs.go:123] Gathering logs for container status ...
	I0914 10:36:17.901058    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:36:20.414839    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:36:22.617383    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:36:22.617765    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:36:22.645275    5189 logs.go:276] 2 containers: [11f9ffdf6e43 ea8a24c9014a]
	I0914 10:36:22.645428    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:36:22.663669    5189 logs.go:276] 2 containers: [46f64762f77a bc0eb1fe6478]
	I0914 10:36:22.663768    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:36:22.677130    5189 logs.go:276] 1 containers: [d58d98b98ad4]
	I0914 10:36:22.677217    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:36:22.688603    5189 logs.go:276] 2 containers: [1b357929f298 f2165e8cce8d]
	I0914 10:36:22.688684    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:36:22.699678    5189 logs.go:276] 1 containers: [213b21806615]
	I0914 10:36:22.699768    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:36:22.710357    5189 logs.go:276] 2 containers: [72fbfd868f6c ccbe87febee7]
	I0914 10:36:22.710438    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:36:22.720256    5189 logs.go:276] 0 containers: []
	W0914 10:36:22.720267    5189 logs.go:278] No container was found matching "kindnet"
	I0914 10:36:22.720335    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:36:22.731427    5189 logs.go:276] 2 containers: [db8219ef9871 7588b357ac42]
	I0914 10:36:22.731446    5189 logs.go:123] Gathering logs for kube-apiserver [ea8a24c9014a] ...
	I0914 10:36:22.731452    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea8a24c9014a"
	I0914 10:36:22.771618    5189 logs.go:123] Gathering logs for etcd [bc0eb1fe6478] ...
	I0914 10:36:22.771627    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0eb1fe6478"
	I0914 10:36:22.787180    5189 logs.go:123] Gathering logs for kube-scheduler [1b357929f298] ...
	I0914 10:36:22.787191    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b357929f298"
	I0914 10:36:22.808259    5189 logs.go:123] Gathering logs for kube-proxy [213b21806615] ...
	I0914 10:36:22.808271    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213b21806615"
	I0914 10:36:22.821575    5189 logs.go:123] Gathering logs for dmesg ...
	I0914 10:36:22.821587    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:36:22.825981    5189 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:36:22.825993    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:36:22.860820    5189 logs.go:123] Gathering logs for coredns [d58d98b98ad4] ...
	I0914 10:36:22.860835    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d58d98b98ad4"
	I0914 10:36:22.872675    5189 logs.go:123] Gathering logs for kube-scheduler [f2165e8cce8d] ...
	I0914 10:36:22.872686    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2165e8cce8d"
	I0914 10:36:22.887646    5189 logs.go:123] Gathering logs for storage-provisioner [db8219ef9871] ...
	I0914 10:36:22.887656    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8219ef9871"
	I0914 10:36:22.899477    5189 logs.go:123] Gathering logs for Docker ...
	I0914 10:36:22.899490    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:36:22.922174    5189 logs.go:123] Gathering logs for container status ...
	I0914 10:36:22.922181    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:36:22.933907    5189 logs.go:123] Gathering logs for kubelet ...
	I0914 10:36:22.933918    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:36:22.972080    5189 logs.go:123] Gathering logs for kube-controller-manager [72fbfd868f6c] ...
	I0914 10:36:22.972088    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72fbfd868f6c"
	I0914 10:36:22.989846    5189 logs.go:123] Gathering logs for storage-provisioner [7588b357ac42] ...
	I0914 10:36:22.989858    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7588b357ac42"
	I0914 10:36:23.001725    5189 logs.go:123] Gathering logs for kube-apiserver [11f9ffdf6e43] ...
	I0914 10:36:23.001735    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11f9ffdf6e43"
	I0914 10:36:23.015748    5189 logs.go:123] Gathering logs for etcd [46f64762f77a] ...
	I0914 10:36:23.015761    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46f64762f77a"
	I0914 10:36:23.034117    5189 logs.go:123] Gathering logs for kube-controller-manager [ccbe87febee7] ...
	I0914 10:36:23.034126    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccbe87febee7"
	I0914 10:36:25.547506    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:36:25.417004    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:36:25.417294    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:36:25.445883    4633 logs.go:276] 1 containers: [04291acc9ea5]
	I0914 10:36:25.446021    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:36:25.463759    4633 logs.go:276] 1 containers: [849350e1760a]
	I0914 10:36:25.463867    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:36:25.477532    4633 logs.go:276] 4 containers: [426f46946fcd 40433f7e0d05 bb0d72a796ab a39016b44acb]
	I0914 10:36:25.477629    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:36:25.489332    4633 logs.go:276] 1 containers: [453a3041b38a]
	I0914 10:36:25.489417    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:36:25.499954    4633 logs.go:276] 1 containers: [f73d335e1ea1]
	I0914 10:36:25.500037    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:36:25.510377    4633 logs.go:276] 1 containers: [0f8efd6fef5c]
	I0914 10:36:25.510463    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:36:25.521275    4633 logs.go:276] 0 containers: []
	W0914 10:36:25.521285    4633 logs.go:278] No container was found matching "kindnet"
	I0914 10:36:25.521350    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:36:25.531987    4633 logs.go:276] 1 containers: [7fc5fd563cda]
	I0914 10:36:25.532003    4633 logs.go:123] Gathering logs for etcd [849350e1760a] ...
	I0914 10:36:25.532008    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849350e1760a"
	I0914 10:36:25.547245    4633 logs.go:123] Gathering logs for coredns [bb0d72a796ab] ...
	I0914 10:36:25.547256    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0d72a796ab"
	I0914 10:36:25.559305    4633 logs.go:123] Gathering logs for storage-provisioner [7fc5fd563cda] ...
	I0914 10:36:25.559315    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc5fd563cda"
	I0914 10:36:25.570918    4633 logs.go:123] Gathering logs for Docker ...
	I0914 10:36:25.570929    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:36:25.594354    4633 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:36:25.594362    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:36:25.627759    4633 logs.go:123] Gathering logs for coredns [40433f7e0d05] ...
	I0914 10:36:25.627774    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40433f7e0d05"
	I0914 10:36:25.639539    4633 logs.go:123] Gathering logs for coredns [a39016b44acb] ...
	I0914 10:36:25.639550    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39016b44acb"
	I0914 10:36:25.651615    4633 logs.go:123] Gathering logs for container status ...
	I0914 10:36:25.651630    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:36:25.663628    4633 logs.go:123] Gathering logs for kubelet ...
	I0914 10:36:25.663638    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:36:25.698198    4633 logs.go:123] Gathering logs for dmesg ...
	I0914 10:36:25.698208    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:36:25.702893    4633 logs.go:123] Gathering logs for kube-scheduler [453a3041b38a] ...
	I0914 10:36:25.702900    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 453a3041b38a"
	I0914 10:36:25.717982    4633 logs.go:123] Gathering logs for kube-proxy [f73d335e1ea1] ...
	I0914 10:36:25.717992    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f73d335e1ea1"
	I0914 10:36:25.729646    4633 logs.go:123] Gathering logs for kube-apiserver [04291acc9ea5] ...
	I0914 10:36:25.729658    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04291acc9ea5"
	I0914 10:36:25.747372    4633 logs.go:123] Gathering logs for coredns [426f46946fcd] ...
	I0914 10:36:25.747385    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 426f46946fcd"
	I0914 10:36:25.759078    4633 logs.go:123] Gathering logs for kube-controller-manager [0f8efd6fef5c] ...
	I0914 10:36:25.759095    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8efd6fef5c"
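
The "describe nodes" step is the one collector that goes through kubectl rather than Docker or journald, pinning both the binary version and the kubeconfig path so it works inside the guest regardless of any host kubectl. Run standalone, the same command looks like this (paths exactly as in the log lines above; only the Go wrapper is added):

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same invocation as the "describe nodes" lines in the log: a
	// version-pinned kubectl plus the in-VM kubeconfig, under sudo.
	out, err := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.24.1/kubectl",
		"describe", "nodes",
		"--kubeconfig=/var/lib/minikube/kubeconfig").CombinedOutput()
	if err != nil {
		fmt.Println("describe nodes failed:", err)
	}
	fmt.Print(string(out))
}
```
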
	I0914 10:36:30.549515    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:36:30.549754    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:36:30.568909    5189 logs.go:276] 2 containers: [11f9ffdf6e43 ea8a24c9014a]
	I0914 10:36:30.569025    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:36:30.584547    5189 logs.go:276] 2 containers: [46f64762f77a bc0eb1fe6478]
	I0914 10:36:30.584678    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:36:30.596862    5189 logs.go:276] 1 containers: [d58d98b98ad4]
	I0914 10:36:30.596949    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:36:30.607994    5189 logs.go:276] 2 containers: [1b357929f298 f2165e8cce8d]
	I0914 10:36:30.608077    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:36:30.618711    5189 logs.go:276] 1 containers: [213b21806615]
	I0914 10:36:30.618794    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:36:30.629540    5189 logs.go:276] 2 containers: [72fbfd868f6c ccbe87febee7]
	I0914 10:36:30.629620    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:36:30.640146    5189 logs.go:276] 0 containers: []
	W0914 10:36:30.640161    5189 logs.go:278] No container was found matching "kindnet"
	I0914 10:36:30.640233    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:36:30.651243    5189 logs.go:276] 2 containers: [db8219ef9871 7588b357ac42]
	I0914 10:36:30.651262    5189 logs.go:123] Gathering logs for dmesg ...
	I0914 10:36:30.651267    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:36:30.655961    5189 logs.go:123] Gathering logs for kube-scheduler [f2165e8cce8d] ...
	I0914 10:36:30.655970    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2165e8cce8d"
	I0914 10:36:30.670450    5189 logs.go:123] Gathering logs for coredns [d58d98b98ad4] ...
	I0914 10:36:30.670460    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d58d98b98ad4"
	I0914 10:36:30.685861    5189 logs.go:123] Gathering logs for kube-scheduler [1b357929f298] ...
	I0914 10:36:30.685873    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b357929f298"
	I0914 10:36:30.697879    5189 logs.go:123] Gathering logs for container status ...
	I0914 10:36:30.697890    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:36:30.709723    5189 logs.go:123] Gathering logs for kubelet ...
	I0914 10:36:30.709732    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:36:30.748260    5189 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:36:30.748275    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:36:30.785427    5189 logs.go:123] Gathering logs for Docker ...
	I0914 10:36:30.785439    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:36:30.809345    5189 logs.go:123] Gathering logs for etcd [46f64762f77a] ...
	I0914 10:36:30.809355    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46f64762f77a"
	I0914 10:36:30.825075    5189 logs.go:123] Gathering logs for kube-controller-manager [ccbe87febee7] ...
	I0914 10:36:30.825087    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccbe87febee7"
	I0914 10:36:30.838735    5189 logs.go:123] Gathering logs for etcd [bc0eb1fe6478] ...
	I0914 10:36:30.838751    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0eb1fe6478"
	I0914 10:36:30.853032    5189 logs.go:123] Gathering logs for kube-proxy [213b21806615] ...
	I0914 10:36:30.853045    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213b21806615"
	I0914 10:36:30.864583    5189 logs.go:123] Gathering logs for kube-controller-manager [72fbfd868f6c] ...
	I0914 10:36:30.864597    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72fbfd868f6c"
	I0914 10:36:30.881421    5189 logs.go:123] Gathering logs for storage-provisioner [db8219ef9871] ...
	I0914 10:36:30.881432    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8219ef9871"
	I0914 10:36:30.893137    5189 logs.go:123] Gathering logs for storage-provisioner [7588b357ac42] ...
	I0914 10:36:30.893148    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7588b357ac42"
	I0914 10:36:30.904818    5189 logs.go:123] Gathering logs for kube-apiserver [11f9ffdf6e43] ...
	I0914 10:36:30.904832    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11f9ffdf6e43"
	I0914 10:36:30.920753    5189 logs.go:123] Gathering logs for kube-apiserver [ea8a24c9014a] ...
	I0914 10:36:30.920762    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea8a24c9014a"
	I0914 10:36:28.279024    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:36:33.461370    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:36:33.281123    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:36:33.281364    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:36:33.300840    4633 logs.go:276] 1 containers: [04291acc9ea5]
	I0914 10:36:33.300955    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:36:33.318694    4633 logs.go:276] 1 containers: [849350e1760a]
	I0914 10:36:33.318786    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:36:33.330915    4633 logs.go:276] 4 containers: [426f46946fcd 40433f7e0d05 bb0d72a796ab a39016b44acb]
	I0914 10:36:33.331001    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:36:33.341622    4633 logs.go:276] 1 containers: [453a3041b38a]
	I0914 10:36:33.341706    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:36:33.352468    4633 logs.go:276] 1 containers: [f73d335e1ea1]
	I0914 10:36:33.352553    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:36:33.363203    4633 logs.go:276] 1 containers: [0f8efd6fef5c]
	I0914 10:36:33.363276    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:36:33.373425    4633 logs.go:276] 0 containers: []
	W0914 10:36:33.373440    4633 logs.go:278] No container was found matching "kindnet"
	I0914 10:36:33.373513    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:36:33.388373    4633 logs.go:276] 1 containers: [7fc5fd563cda]
	I0914 10:36:33.388392    4633 logs.go:123] Gathering logs for etcd [849350e1760a] ...
	I0914 10:36:33.388398    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849350e1760a"
	I0914 10:36:33.402311    4633 logs.go:123] Gathering logs for coredns [bb0d72a796ab] ...
	I0914 10:36:33.402326    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0d72a796ab"
	I0914 10:36:33.414117    4633 logs.go:123] Gathering logs for kube-proxy [f73d335e1ea1] ...
	I0914 10:36:33.414128    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f73d335e1ea1"
	I0914 10:36:33.426340    4633 logs.go:123] Gathering logs for container status ...
	I0914 10:36:33.426350    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:36:33.437779    4633 logs.go:123] Gathering logs for kubelet ...
	I0914 10:36:33.437792    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:36:33.473118    4633 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:36:33.473128    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:36:33.507906    4633 logs.go:123] Gathering logs for coredns [426f46946fcd] ...
	I0914 10:36:33.507922    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 426f46946fcd"
	I0914 10:36:33.519242    4633 logs.go:123] Gathering logs for coredns [a39016b44acb] ...
	I0914 10:36:33.519253    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39016b44acb"
	I0914 10:36:33.531962    4633 logs.go:123] Gathering logs for kube-scheduler [453a3041b38a] ...
	I0914 10:36:33.531974    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 453a3041b38a"
	I0914 10:36:33.551249    4633 logs.go:123] Gathering logs for dmesg ...
	I0914 10:36:33.551259    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:36:33.555865    4633 logs.go:123] Gathering logs for kube-apiserver [04291acc9ea5] ...
	I0914 10:36:33.555872    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04291acc9ea5"
	I0914 10:36:33.570315    4633 logs.go:123] Gathering logs for coredns [40433f7e0d05] ...
	I0914 10:36:33.570328    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40433f7e0d05"
	I0914 10:36:33.582438    4633 logs.go:123] Gathering logs for kube-controller-manager [0f8efd6fef5c] ...
	I0914 10:36:33.582448    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8efd6fef5c"
	I0914 10:36:33.599870    4633 logs.go:123] Gathering logs for storage-provisioner [7fc5fd563cda] ...
	I0914 10:36:33.599879    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc5fd563cda"
	I0914 10:36:33.611593    4633 logs.go:123] Gathering logs for Docker ...
	I0914 10:36:33.611605    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:36:36.136839    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:36:38.463388    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:36:38.463752    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:36:38.496583    5189 logs.go:276] 2 containers: [11f9ffdf6e43 ea8a24c9014a]
	I0914 10:36:38.496713    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:36:38.512714    5189 logs.go:276] 2 containers: [46f64762f77a bc0eb1fe6478]
	I0914 10:36:38.512811    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:36:38.525531    5189 logs.go:276] 1 containers: [d58d98b98ad4]
	I0914 10:36:38.525616    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:36:38.536554    5189 logs.go:276] 2 containers: [1b357929f298 f2165e8cce8d]
	I0914 10:36:38.536629    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:36:38.546810    5189 logs.go:276] 1 containers: [213b21806615]
	I0914 10:36:38.546896    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:36:38.558208    5189 logs.go:276] 2 containers: [72fbfd868f6c ccbe87febee7]
	I0914 10:36:38.558293    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:36:38.568428    5189 logs.go:276] 0 containers: []
	W0914 10:36:38.568441    5189 logs.go:278] No container was found matching "kindnet"
	I0914 10:36:38.568512    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:36:38.579201    5189 logs.go:276] 2 containers: [db8219ef9871 7588b357ac42]
	I0914 10:36:38.579218    5189 logs.go:123] Gathering logs for dmesg ...
	I0914 10:36:38.579224    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:36:38.584095    5189 logs.go:123] Gathering logs for etcd [46f64762f77a] ...
	I0914 10:36:38.584103    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46f64762f77a"
	I0914 10:36:38.604249    5189 logs.go:123] Gathering logs for kube-scheduler [1b357929f298] ...
	I0914 10:36:38.604259    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b357929f298"
	I0914 10:36:38.615862    5189 logs.go:123] Gathering logs for kube-proxy [213b21806615] ...
	I0914 10:36:38.615873    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213b21806615"
	I0914 10:36:38.627914    5189 logs.go:123] Gathering logs for kube-scheduler [f2165e8cce8d] ...
	I0914 10:36:38.627925    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2165e8cce8d"
	I0914 10:36:38.641922    5189 logs.go:123] Gathering logs for kube-apiserver [11f9ffdf6e43] ...
	I0914 10:36:38.641932    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11f9ffdf6e43"
	I0914 10:36:38.657641    5189 logs.go:123] Gathering logs for kube-apiserver [ea8a24c9014a] ...
	I0914 10:36:38.657651    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea8a24c9014a"
	I0914 10:36:38.696133    5189 logs.go:123] Gathering logs for etcd [bc0eb1fe6478] ...
	I0914 10:36:38.696144    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0eb1fe6478"
	I0914 10:36:38.710851    5189 logs.go:123] Gathering logs for coredns [d58d98b98ad4] ...
	I0914 10:36:38.710861    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d58d98b98ad4"
	I0914 10:36:38.722112    5189 logs.go:123] Gathering logs for kube-controller-manager [ccbe87febee7] ...
	I0914 10:36:38.722123    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccbe87febee7"
	I0914 10:36:38.735768    5189 logs.go:123] Gathering logs for Docker ...
	I0914 10:36:38.735781    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:36:38.759835    5189 logs.go:123] Gathering logs for container status ...
	I0914 10:36:38.759846    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:36:38.771980    5189 logs.go:123] Gathering logs for storage-provisioner [7588b357ac42] ...
	I0914 10:36:38.771993    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7588b357ac42"
	I0914 10:36:38.784263    5189 logs.go:123] Gathering logs for kubelet ...
	I0914 10:36:38.784274    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:36:38.821958    5189 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:36:38.821971    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:36:38.855550    5189 logs.go:123] Gathering logs for kube-controller-manager [72fbfd868f6c] ...
	I0914 10:36:38.855562    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72fbfd868f6c"
	I0914 10:36:38.872746    5189 logs.go:123] Gathering logs for storage-provisioner [db8219ef9871] ...
	I0914 10:36:38.872755    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8219ef9871"
	I0914 10:36:41.139376    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:36:41.139901    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:36:41.183925    4633 logs.go:276] 1 containers: [04291acc9ea5]
	I0914 10:36:41.184086    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:36:41.203443    4633 logs.go:276] 1 containers: [849350e1760a]
	I0914 10:36:41.203552    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:36:41.217879    4633 logs.go:276] 4 containers: [426f46946fcd 40433f7e0d05 bb0d72a796ab a39016b44acb]
	I0914 10:36:41.217972    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:36:41.229943    4633 logs.go:276] 1 containers: [453a3041b38a]
	I0914 10:36:41.230033    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:36:41.242029    4633 logs.go:276] 1 containers: [f73d335e1ea1]
	I0914 10:36:41.242112    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:36:41.256542    4633 logs.go:276] 1 containers: [0f8efd6fef5c]
	I0914 10:36:41.256624    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:36:41.267652    4633 logs.go:276] 0 containers: []
	W0914 10:36:41.267664    4633 logs.go:278] No container was found matching "kindnet"
	I0914 10:36:41.267734    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:36:41.277596    4633 logs.go:276] 1 containers: [7fc5fd563cda]
	I0914 10:36:41.277612    4633 logs.go:123] Gathering logs for kube-scheduler [453a3041b38a] ...
	I0914 10:36:41.277617    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 453a3041b38a"
	I0914 10:36:41.292094    4633 logs.go:123] Gathering logs for storage-provisioner [7fc5fd563cda] ...
	I0914 10:36:41.292108    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc5fd563cda"
	I0914 10:36:41.303894    4633 logs.go:123] Gathering logs for coredns [bb0d72a796ab] ...
	I0914 10:36:41.303903    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0d72a796ab"
	I0914 10:36:41.315365    4633 logs.go:123] Gathering logs for coredns [a39016b44acb] ...
	I0914 10:36:41.315378    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39016b44acb"
	I0914 10:36:41.326851    4633 logs.go:123] Gathering logs for kube-proxy [f73d335e1ea1] ...
	I0914 10:36:41.326861    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f73d335e1ea1"
	I0914 10:36:41.338227    4633 logs.go:123] Gathering logs for container status ...
	I0914 10:36:41.338240    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:36:41.351001    4633 logs.go:123] Gathering logs for kube-apiserver [04291acc9ea5] ...
	I0914 10:36:41.351011    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04291acc9ea5"
	I0914 10:36:41.365449    4633 logs.go:123] Gathering logs for coredns [40433f7e0d05] ...
	I0914 10:36:41.365461    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40433f7e0d05"
	I0914 10:36:41.378638    4633 logs.go:123] Gathering logs for Docker ...
	I0914 10:36:41.378651    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:36:41.404153    4633 logs.go:123] Gathering logs for dmesg ...
	I0914 10:36:41.404163    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:36:41.408769    4633 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:36:41.408776    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:36:41.461188    4633 logs.go:123] Gathering logs for etcd [849350e1760a] ...
	I0914 10:36:41.461199    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849350e1760a"
	I0914 10:36:41.475669    4633 logs.go:123] Gathering logs for coredns [426f46946fcd] ...
	I0914 10:36:41.475679    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 426f46946fcd"
	I0914 10:36:41.487206    4633 logs.go:123] Gathering logs for kube-controller-manager [0f8efd6fef5c] ...
	I0914 10:36:41.487216    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8efd6fef5c"
	I0914 10:36:41.509345    4633 logs.go:123] Gathering logs for kubelet ...
	I0914 10:36:41.509355    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:36:41.385669    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:36:44.045123    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:36:46.385741    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:36:46.385953    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:36:46.403653    5189 logs.go:276] 2 containers: [11f9ffdf6e43 ea8a24c9014a]
	I0914 10:36:46.403760    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:36:46.420963    5189 logs.go:276] 2 containers: [46f64762f77a bc0eb1fe6478]
	I0914 10:36:46.421055    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:36:46.431555    5189 logs.go:276] 1 containers: [d58d98b98ad4]
	I0914 10:36:46.431639    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:36:46.441900    5189 logs.go:276] 2 containers: [1b357929f298 f2165e8cce8d]
	I0914 10:36:46.441983    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:36:46.452238    5189 logs.go:276] 1 containers: [213b21806615]
	I0914 10:36:46.452315    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:36:46.462770    5189 logs.go:276] 2 containers: [72fbfd868f6c ccbe87febee7]
	I0914 10:36:46.462869    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:36:46.472883    5189 logs.go:276] 0 containers: []
	W0914 10:36:46.472895    5189 logs.go:278] No container was found matching "kindnet"
	I0914 10:36:46.472965    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:36:46.483528    5189 logs.go:276] 2 containers: [db8219ef9871 7588b357ac42]
	I0914 10:36:46.483546    5189 logs.go:123] Gathering logs for kube-scheduler [f2165e8cce8d] ...
	I0914 10:36:46.483551    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2165e8cce8d"
	I0914 10:36:46.498073    5189 logs.go:123] Gathering logs for dmesg ...
	I0914 10:36:46.498083    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:36:46.502258    5189 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:36:46.502268    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:36:46.537497    5189 logs.go:123] Gathering logs for kube-controller-manager [72fbfd868f6c] ...
	I0914 10:36:46.537508    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72fbfd868f6c"
	I0914 10:36:46.561059    5189 logs.go:123] Gathering logs for storage-provisioner [7588b357ac42] ...
	I0914 10:36:46.561067    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7588b357ac42"
	I0914 10:36:46.572473    5189 logs.go:123] Gathering logs for etcd [46f64762f77a] ...
	I0914 10:36:46.572482    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46f64762f77a"
	I0914 10:36:46.588798    5189 logs.go:123] Gathering logs for kube-scheduler [1b357929f298] ...
	I0914 10:36:46.588808    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b357929f298"
	I0914 10:36:46.600584    5189 logs.go:123] Gathering logs for etcd [bc0eb1fe6478] ...
	I0914 10:36:46.600595    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0eb1fe6478"
	I0914 10:36:46.615377    5189 logs.go:123] Gathering logs for coredns [d58d98b98ad4] ...
	I0914 10:36:46.615387    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d58d98b98ad4"
	I0914 10:36:46.626604    5189 logs.go:123] Gathering logs for storage-provisioner [db8219ef9871] ...
	I0914 10:36:46.626614    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8219ef9871"
	I0914 10:36:46.637504    5189 logs.go:123] Gathering logs for Docker ...
	I0914 10:36:46.637514    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:36:46.662334    5189 logs.go:123] Gathering logs for container status ...
	I0914 10:36:46.662347    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:36:46.674579    5189 logs.go:123] Gathering logs for kubelet ...
	I0914 10:36:46.674591    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:36:46.714215    5189 logs.go:123] Gathering logs for kube-apiserver [ea8a24c9014a] ...
	I0914 10:36:46.714226    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea8a24c9014a"
	I0914 10:36:46.757224    5189 logs.go:123] Gathering logs for kube-controller-manager [ccbe87febee7] ...
	I0914 10:36:46.757240    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccbe87febee7"
	I0914 10:36:46.771107    5189 logs.go:123] Gathering logs for kube-apiserver [11f9ffdf6e43] ...
	I0914 10:36:46.771118    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11f9ffdf6e43"
	I0914 10:36:46.787026    5189 logs.go:123] Gathering logs for kube-proxy [213b21806615] ...
	I0914 10:36:46.787037    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213b21806615"
	I0914 10:36:49.300732    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:36:49.047631    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:36:49.047953    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:36:49.073480    4633 logs.go:276] 1 containers: [04291acc9ea5]
	I0914 10:36:49.073608    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:36:49.091323    4633 logs.go:276] 1 containers: [849350e1760a]
	I0914 10:36:49.091427    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:36:49.104363    4633 logs.go:276] 4 containers: [426f46946fcd 40433f7e0d05 bb0d72a796ab a39016b44acb]
	I0914 10:36:49.104444    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:36:49.115013    4633 logs.go:276] 1 containers: [453a3041b38a]
	I0914 10:36:49.115102    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:36:49.125688    4633 logs.go:276] 1 containers: [f73d335e1ea1]
	I0914 10:36:49.125761    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:36:49.136656    4633 logs.go:276] 1 containers: [0f8efd6fef5c]
	I0914 10:36:49.136739    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:36:49.147009    4633 logs.go:276] 0 containers: []
	W0914 10:36:49.147024    4633 logs.go:278] No container was found matching "kindnet"
	I0914 10:36:49.147106    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:36:49.157509    4633 logs.go:276] 1 containers: [7fc5fd563cda]
	I0914 10:36:49.157527    4633 logs.go:123] Gathering logs for kubelet ...
	I0914 10:36:49.157533    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:36:49.193060    4633 logs.go:123] Gathering logs for etcd [849350e1760a] ...
	I0914 10:36:49.193071    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849350e1760a"
	I0914 10:36:49.207656    4633 logs.go:123] Gathering logs for coredns [bb0d72a796ab] ...
	I0914 10:36:49.207670    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0d72a796ab"
	I0914 10:36:49.218995    4633 logs.go:123] Gathering logs for coredns [a39016b44acb] ...
	I0914 10:36:49.219005    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39016b44acb"
	I0914 10:36:49.230835    4633 logs.go:123] Gathering logs for kube-proxy [f73d335e1ea1] ...
	I0914 10:36:49.230847    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f73d335e1ea1"
	I0914 10:36:49.242482    4633 logs.go:123] Gathering logs for kube-apiserver [04291acc9ea5] ...
	I0914 10:36:49.242492    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04291acc9ea5"
	I0914 10:36:49.259148    4633 logs.go:123] Gathering logs for coredns [426f46946fcd] ...
	I0914 10:36:49.259158    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 426f46946fcd"
	I0914 10:36:49.270551    4633 logs.go:123] Gathering logs for container status ...
	I0914 10:36:49.270561    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:36:49.282352    4633 logs.go:123] Gathering logs for dmesg ...
	I0914 10:36:49.282362    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:36:49.287016    4633 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:36:49.287022    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:36:49.320500    4633 logs.go:123] Gathering logs for coredns [40433f7e0d05] ...
	I0914 10:36:49.320514    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40433f7e0d05"
	I0914 10:36:49.332149    4633 logs.go:123] Gathering logs for kube-controller-manager [0f8efd6fef5c] ...
	I0914 10:36:49.332165    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8efd6fef5c"
	I0914 10:36:49.349278    4633 logs.go:123] Gathering logs for kube-scheduler [453a3041b38a] ...
	I0914 10:36:49.349288    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 453a3041b38a"
	I0914 10:36:49.364183    4633 logs.go:123] Gathering logs for storage-provisioner [7fc5fd563cda] ...
	I0914 10:36:49.364192    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc5fd563cda"
	I0914 10:36:49.376147    4633 logs.go:123] Gathering logs for Docker ...
	I0914 10:36:49.376157    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:36:51.901495    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:36:54.302837    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:36:54.303176    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:36:54.328047    5189 logs.go:276] 2 containers: [11f9ffdf6e43 ea8a24c9014a]
	I0914 10:36:54.328141    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:36:54.345338    5189 logs.go:276] 2 containers: [46f64762f77a bc0eb1fe6478]
	I0914 10:36:54.345422    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:36:54.358009    5189 logs.go:276] 1 containers: [d58d98b98ad4]
	I0914 10:36:54.358084    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:36:54.370403    5189 logs.go:276] 2 containers: [1b357929f298 f2165e8cce8d]
	I0914 10:36:54.370485    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:36:54.380859    5189 logs.go:276] 1 containers: [213b21806615]
	I0914 10:36:54.380939    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:36:54.392232    5189 logs.go:276] 2 containers: [72fbfd868f6c ccbe87febee7]
	I0914 10:36:54.392306    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:36:54.402621    5189 logs.go:276] 0 containers: []
	W0914 10:36:54.402633    5189 logs.go:278] No container was found matching "kindnet"
	I0914 10:36:54.402705    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:36:54.413193    5189 logs.go:276] 2 containers: [db8219ef9871 7588b357ac42]
	I0914 10:36:54.413210    5189 logs.go:123] Gathering logs for kube-apiserver [11f9ffdf6e43] ...
	I0914 10:36:54.413216    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11f9ffdf6e43"
	I0914 10:36:54.427062    5189 logs.go:123] Gathering logs for kube-proxy [213b21806615] ...
	I0914 10:36:54.427076    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213b21806615"
	I0914 10:36:54.439304    5189 logs.go:123] Gathering logs for kubelet ...
	I0914 10:36:54.439317    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:36:54.480085    5189 logs.go:123] Gathering logs for dmesg ...
	I0914 10:36:54.480098    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:36:54.484617    5189 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:36:54.484625    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:36:54.520002    5189 logs.go:123] Gathering logs for etcd [bc0eb1fe6478] ...
	I0914 10:36:54.520013    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0eb1fe6478"
	I0914 10:36:54.534602    5189 logs.go:123] Gathering logs for coredns [d58d98b98ad4] ...
	I0914 10:36:54.534617    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d58d98b98ad4"
	I0914 10:36:54.545868    5189 logs.go:123] Gathering logs for etcd [46f64762f77a] ...
	I0914 10:36:54.545880    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46f64762f77a"
	I0914 10:36:54.568114    5189 logs.go:123] Gathering logs for kube-controller-manager [ccbe87febee7] ...
	I0914 10:36:54.568128    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccbe87febee7"
	I0914 10:36:54.582040    5189 logs.go:123] Gathering logs for Docker ...
	I0914 10:36:54.582055    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:36:54.604771    5189 logs.go:123] Gathering logs for container status ...
	I0914 10:36:54.604778    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:36:54.616214    5189 logs.go:123] Gathering logs for kube-apiserver [ea8a24c9014a] ...
	I0914 10:36:54.616228    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea8a24c9014a"
	I0914 10:36:54.654766    5189 logs.go:123] Gathering logs for kube-scheduler [1b357929f298] ...
	I0914 10:36:54.654776    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b357929f298"
	I0914 10:36:54.670566    5189 logs.go:123] Gathering logs for kube-scheduler [f2165e8cce8d] ...
	I0914 10:36:54.670575    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2165e8cce8d"
	I0914 10:36:54.685105    5189 logs.go:123] Gathering logs for kube-controller-manager [72fbfd868f6c] ...
	I0914 10:36:54.685116    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72fbfd868f6c"
	I0914 10:36:54.702947    5189 logs.go:123] Gathering logs for storage-provisioner [db8219ef9871] ...
	I0914 10:36:54.702960    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8219ef9871"
	I0914 10:36:54.718956    5189 logs.go:123] Gathering logs for storage-provisioner [7588b357ac42] ...
	I0914 10:36:54.718964    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7588b357ac42"
	I0914 10:36:56.901796    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:36:56.902005    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:36:56.916821    4633 logs.go:276] 1 containers: [04291acc9ea5]
	I0914 10:36:56.916926    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:36:56.928819    4633 logs.go:276] 1 containers: [849350e1760a]
	I0914 10:36:56.928910    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:36:56.939542    4633 logs.go:276] 4 containers: [426f46946fcd 40433f7e0d05 bb0d72a796ab a39016b44acb]
	I0914 10:36:56.939623    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:36:56.949503    4633 logs.go:276] 1 containers: [453a3041b38a]
	I0914 10:36:56.949577    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:36:56.963179    4633 logs.go:276] 1 containers: [f73d335e1ea1]
	I0914 10:36:56.963252    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:36:56.974139    4633 logs.go:276] 1 containers: [0f8efd6fef5c]
	I0914 10:36:56.974214    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:36:56.984350    4633 logs.go:276] 0 containers: []
	W0914 10:36:56.984361    4633 logs.go:278] No container was found matching "kindnet"
	I0914 10:36:56.984440    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:36:56.995323    4633 logs.go:276] 1 containers: [7fc5fd563cda]
	I0914 10:36:56.995340    4633 logs.go:123] Gathering logs for dmesg ...
	I0914 10:36:56.995345    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:36:57.000183    4633 logs.go:123] Gathering logs for coredns [426f46946fcd] ...
	I0914 10:36:57.000190    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 426f46946fcd"
	I0914 10:36:57.012955    4633 logs.go:123] Gathering logs for kube-scheduler [453a3041b38a] ...
	I0914 10:36:57.012965    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 453a3041b38a"
	I0914 10:36:57.028121    4633 logs.go:123] Gathering logs for kube-proxy [f73d335e1ea1] ...
	I0914 10:36:57.028131    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f73d335e1ea1"
	I0914 10:36:57.039995    4633 logs.go:123] Gathering logs for container status ...
	I0914 10:36:57.040006    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:36:57.051753    4633 logs.go:123] Gathering logs for kube-controller-manager [0f8efd6fef5c] ...
	I0914 10:36:57.051763    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8efd6fef5c"
	I0914 10:36:57.069067    4633 logs.go:123] Gathering logs for storage-provisioner [7fc5fd563cda] ...
	I0914 10:36:57.069077    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc5fd563cda"
	I0914 10:36:57.080515    4633 logs.go:123] Gathering logs for kubelet ...
	I0914 10:36:57.080526    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:36:57.115267    4633 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:36:57.115276    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:36:57.149795    4633 logs.go:123] Gathering logs for etcd [849350e1760a] ...
	I0914 10:36:57.149805    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849350e1760a"
	I0914 10:36:57.164105    4633 logs.go:123] Gathering logs for coredns [40433f7e0d05] ...
	I0914 10:36:57.164114    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40433f7e0d05"
	I0914 10:36:57.175582    4633 logs.go:123] Gathering logs for coredns [bb0d72a796ab] ...
	I0914 10:36:57.175593    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0d72a796ab"
	I0914 10:36:57.187414    4633 logs.go:123] Gathering logs for coredns [a39016b44acb] ...
	I0914 10:36:57.187425    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39016b44acb"
	I0914 10:36:57.199276    4633 logs.go:123] Gathering logs for kube-apiserver [04291acc9ea5] ...
	I0914 10:36:57.199288    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04291acc9ea5"
	I0914 10:36:57.213909    4633 logs.go:123] Gathering logs for Docker ...
	I0914 10:36:57.213923    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:36:57.230536    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:36:59.740549    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:37:02.230637    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:37:02.230981    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:37:02.259281    5189 logs.go:276] 2 containers: [11f9ffdf6e43 ea8a24c9014a]
	I0914 10:37:02.259428    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:37:02.277421    5189 logs.go:276] 2 containers: [46f64762f77a bc0eb1fe6478]
	I0914 10:37:02.277524    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:37:02.290901    5189 logs.go:276] 1 containers: [d58d98b98ad4]
	I0914 10:37:02.290996    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:37:02.302773    5189 logs.go:276] 2 containers: [1b357929f298 f2165e8cce8d]
	I0914 10:37:02.302859    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:37:02.313042    5189 logs.go:276] 1 containers: [213b21806615]
	I0914 10:37:02.313129    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:37:02.323367    5189 logs.go:276] 2 containers: [72fbfd868f6c ccbe87febee7]
	I0914 10:37:02.323449    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:37:02.337234    5189 logs.go:276] 0 containers: []
	W0914 10:37:02.337249    5189 logs.go:278] No container was found matching "kindnet"
	I0914 10:37:02.337318    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:37:02.347751    5189 logs.go:276] 2 containers: [db8219ef9871 7588b357ac42]
	I0914 10:37:02.347769    5189 logs.go:123] Gathering logs for kube-scheduler [f2165e8cce8d] ...
	I0914 10:37:02.347774    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2165e8cce8d"
	I0914 10:37:02.362799    5189 logs.go:123] Gathering logs for storage-provisioner [7588b357ac42] ...
	I0914 10:37:02.362810    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7588b357ac42"
	I0914 10:37:02.374569    5189 logs.go:123] Gathering logs for Docker ...
	I0914 10:37:02.374582    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:37:02.396270    5189 logs.go:123] Gathering logs for dmesg ...
	I0914 10:37:02.396277    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:37:02.400279    5189 logs.go:123] Gathering logs for kube-apiserver [11f9ffdf6e43] ...
	I0914 10:37:02.400293    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11f9ffdf6e43"
	I0914 10:37:02.419328    5189 logs.go:123] Gathering logs for etcd [46f64762f77a] ...
	I0914 10:37:02.419339    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46f64762f77a"
	I0914 10:37:02.439027    5189 logs.go:123] Gathering logs for container status ...
	I0914 10:37:02.439036    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:37:02.451250    5189 logs.go:123] Gathering logs for kube-apiserver [ea8a24c9014a] ...
	I0914 10:37:02.451260    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea8a24c9014a"
	I0914 10:37:02.489194    5189 logs.go:123] Gathering logs for coredns [d58d98b98ad4] ...
	I0914 10:37:02.489206    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d58d98b98ad4"
	I0914 10:37:02.500806    5189 logs.go:123] Gathering logs for kube-scheduler [1b357929f298] ...
	I0914 10:37:02.500816    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b357929f298"
	I0914 10:37:02.513340    5189 logs.go:123] Gathering logs for storage-provisioner [db8219ef9871] ...
	I0914 10:37:02.513350    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8219ef9871"
	I0914 10:37:02.526028    5189 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:37:02.526039    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:37:02.564407    5189 logs.go:123] Gathering logs for etcd [bc0eb1fe6478] ...
	I0914 10:37:02.564418    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0eb1fe6478"
	I0914 10:37:02.581537    5189 logs.go:123] Gathering logs for kube-controller-manager [72fbfd868f6c] ...
	I0914 10:37:02.581547    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72fbfd868f6c"
	I0914 10:37:02.598566    5189 logs.go:123] Gathering logs for kubelet ...
	I0914 10:37:02.598576    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:37:02.635883    5189 logs.go:123] Gathering logs for kube-proxy [213b21806615] ...
	I0914 10:37:02.635891    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213b21806615"
	I0914 10:37:02.647905    5189 logs.go:123] Gathering logs for kube-controller-manager [ccbe87febee7] ...
	I0914 10:37:02.647914    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccbe87febee7"
	I0914 10:37:05.163512    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:37:04.742622    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:37:04.742742    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:37:04.763576    4633 logs.go:276] 1 containers: [04291acc9ea5]
	I0914 10:37:04.763666    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:37:04.779044    4633 logs.go:276] 1 containers: [849350e1760a]
	I0914 10:37:04.779132    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:37:04.790489    4633 logs.go:276] 4 containers: [426f46946fcd 40433f7e0d05 bb0d72a796ab a39016b44acb]
	I0914 10:37:04.790571    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:37:04.802755    4633 logs.go:276] 1 containers: [453a3041b38a]
	I0914 10:37:04.802829    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:37:04.815331    4633 logs.go:276] 1 containers: [f73d335e1ea1]
	I0914 10:37:04.815413    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:37:04.826758    4633 logs.go:276] 1 containers: [0f8efd6fef5c]
	I0914 10:37:04.826850    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:37:04.837386    4633 logs.go:276] 0 containers: []
	W0914 10:37:04.837398    4633 logs.go:278] No container was found matching "kindnet"
	I0914 10:37:04.837471    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:37:04.848244    4633 logs.go:276] 1 containers: [7fc5fd563cda]
	I0914 10:37:04.848261    4633 logs.go:123] Gathering logs for container status ...
	I0914 10:37:04.848267    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:37:04.860229    4633 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:37:04.860240    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:37:04.898953    4633 logs.go:123] Gathering logs for coredns [a39016b44acb] ...
	I0914 10:37:04.898964    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39016b44acb"
	I0914 10:37:04.910675    4633 logs.go:123] Gathering logs for kube-scheduler [453a3041b38a] ...
	I0914 10:37:04.910686    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 453a3041b38a"
	I0914 10:37:04.930715    4633 logs.go:123] Gathering logs for kube-controller-manager [0f8efd6fef5c] ...
	I0914 10:37:04.930724    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8efd6fef5c"
	I0914 10:37:04.948977    4633 logs.go:123] Gathering logs for storage-provisioner [7fc5fd563cda] ...
	I0914 10:37:04.948987    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc5fd563cda"
	I0914 10:37:04.961013    4633 logs.go:123] Gathering logs for kubelet ...
	I0914 10:37:04.961026    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:37:04.995686    4633 logs.go:123] Gathering logs for coredns [40433f7e0d05] ...
	I0914 10:37:04.995698    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40433f7e0d05"
	I0914 10:37:05.007945    4633 logs.go:123] Gathering logs for dmesg ...
	I0914 10:37:05.007958    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:37:05.013187    4633 logs.go:123] Gathering logs for etcd [849350e1760a] ...
	I0914 10:37:05.013195    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849350e1760a"
	I0914 10:37:05.027513    4633 logs.go:123] Gathering logs for kube-proxy [f73d335e1ea1] ...
	I0914 10:37:05.027523    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f73d335e1ea1"
	I0914 10:37:05.039464    4633 logs.go:123] Gathering logs for kube-apiserver [04291acc9ea5] ...
	I0914 10:37:05.039473    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04291acc9ea5"
	I0914 10:37:05.057810    4633 logs.go:123] Gathering logs for coredns [426f46946fcd] ...
	I0914 10:37:05.057824    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 426f46946fcd"
	I0914 10:37:05.069275    4633 logs.go:123] Gathering logs for coredns [bb0d72a796ab] ...
	I0914 10:37:05.069285    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0d72a796ab"
	I0914 10:37:05.080566    4633 logs.go:123] Gathering logs for Docker ...
	I0914 10:37:05.080576    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:37:10.165456    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:37:10.165775    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:37:10.194243    5189 logs.go:276] 2 containers: [11f9ffdf6e43 ea8a24c9014a]
	I0914 10:37:10.194392    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:37:10.212445    5189 logs.go:276] 2 containers: [46f64762f77a bc0eb1fe6478]
	I0914 10:37:10.212565    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:37:10.226089    5189 logs.go:276] 1 containers: [d58d98b98ad4]
	I0914 10:37:10.226185    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:37:10.237547    5189 logs.go:276] 2 containers: [1b357929f298 f2165e8cce8d]
	I0914 10:37:10.237639    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:37:10.247999    5189 logs.go:276] 1 containers: [213b21806615]
	I0914 10:37:10.248083    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:37:10.259325    5189 logs.go:276] 2 containers: [72fbfd868f6c ccbe87febee7]
	I0914 10:37:10.259409    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:37:10.269170    5189 logs.go:276] 0 containers: []
	W0914 10:37:10.269182    5189 logs.go:278] No container was found matching "kindnet"
	I0914 10:37:10.269246    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:37:10.279839    5189 logs.go:276] 2 containers: [db8219ef9871 7588b357ac42]
	I0914 10:37:10.279855    5189 logs.go:123] Gathering logs for coredns [d58d98b98ad4] ...
	I0914 10:37:10.279860    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d58d98b98ad4"
	I0914 10:37:10.291485    5189 logs.go:123] Gathering logs for Docker ...
	I0914 10:37:10.291497    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:37:10.315199    5189 logs.go:123] Gathering logs for kubelet ...
	I0914 10:37:10.315210    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:37:10.353491    5189 logs.go:123] Gathering logs for dmesg ...
	I0914 10:37:10.353501    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:37:10.357672    5189 logs.go:123] Gathering logs for kube-apiserver [ea8a24c9014a] ...
	I0914 10:37:10.357678    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea8a24c9014a"
	I0914 10:37:10.395593    5189 logs.go:123] Gathering logs for etcd [bc0eb1fe6478] ...
	I0914 10:37:10.395603    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0eb1fe6478"
	I0914 10:37:10.410044    5189 logs.go:123] Gathering logs for storage-provisioner [db8219ef9871] ...
	I0914 10:37:10.410056    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8219ef9871"
	I0914 10:37:10.421734    5189 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:37:10.421745    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:37:10.456356    5189 logs.go:123] Gathering logs for kube-scheduler [1b357929f298] ...
	I0914 10:37:10.456368    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b357929f298"
	I0914 10:37:10.468145    5189 logs.go:123] Gathering logs for kube-scheduler [f2165e8cce8d] ...
	I0914 10:37:10.468155    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2165e8cce8d"
	I0914 10:37:10.482204    5189 logs.go:123] Gathering logs for kube-proxy [213b21806615] ...
	I0914 10:37:10.482217    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213b21806615"
	I0914 10:37:10.493387    5189 logs.go:123] Gathering logs for kube-controller-manager [72fbfd868f6c] ...
	I0914 10:37:10.493397    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72fbfd868f6c"
	I0914 10:37:10.510820    5189 logs.go:123] Gathering logs for kube-controller-manager [ccbe87febee7] ...
	I0914 10:37:10.510832    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccbe87febee7"
	I0914 10:37:10.524731    5189 logs.go:123] Gathering logs for kube-apiserver [11f9ffdf6e43] ...
	I0914 10:37:10.524742    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11f9ffdf6e43"
	I0914 10:37:10.538826    5189 logs.go:123] Gathering logs for etcd [46f64762f77a] ...
	I0914 10:37:10.538835    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46f64762f77a"
	I0914 10:37:10.552882    5189 logs.go:123] Gathering logs for storage-provisioner [7588b357ac42] ...
	I0914 10:37:10.552892    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7588b357ac42"
	I0914 10:37:10.564058    5189 logs.go:123] Gathering logs for container status ...
	I0914 10:37:10.564069    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:37:07.607824    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:37:13.084156    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:37:12.609955    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:37:12.610254    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:37:12.632940    4633 logs.go:276] 1 containers: [04291acc9ea5]
	I0914 10:37:12.633080    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:37:12.648521    4633 logs.go:276] 1 containers: [849350e1760a]
	I0914 10:37:12.648606    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:37:12.662434    4633 logs.go:276] 4 containers: [426f46946fcd 40433f7e0d05 bb0d72a796ab a39016b44acb]
	I0914 10:37:12.662527    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:37:12.673466    4633 logs.go:276] 1 containers: [453a3041b38a]
	I0914 10:37:12.673546    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:37:12.683507    4633 logs.go:276] 1 containers: [f73d335e1ea1]
	I0914 10:37:12.683591    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:37:12.693676    4633 logs.go:276] 1 containers: [0f8efd6fef5c]
	I0914 10:37:12.693754    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:37:12.703940    4633 logs.go:276] 0 containers: []
	W0914 10:37:12.703951    4633 logs.go:278] No container was found matching "kindnet"
	I0914 10:37:12.704020    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:37:12.714319    4633 logs.go:276] 1 containers: [7fc5fd563cda]
	I0914 10:37:12.714337    4633 logs.go:123] Gathering logs for container status ...
	I0914 10:37:12.714343    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:37:12.727375    4633 logs.go:123] Gathering logs for dmesg ...
	I0914 10:37:12.727388    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:37:12.732176    4633 logs.go:123] Gathering logs for kube-apiserver [04291acc9ea5] ...
	I0914 10:37:12.732182    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04291acc9ea5"
	I0914 10:37:12.746450    4633 logs.go:123] Gathering logs for etcd [849350e1760a] ...
	I0914 10:37:12.746461    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849350e1760a"
	I0914 10:37:12.759943    4633 logs.go:123] Gathering logs for coredns [a39016b44acb] ...
	I0914 10:37:12.759955    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39016b44acb"
	I0914 10:37:12.772564    4633 logs.go:123] Gathering logs for storage-provisioner [7fc5fd563cda] ...
	I0914 10:37:12.772576    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc5fd563cda"
	I0914 10:37:12.784418    4633 logs.go:123] Gathering logs for coredns [426f46946fcd] ...
	I0914 10:37:12.784433    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 426f46946fcd"
	I0914 10:37:12.796050    4633 logs.go:123] Gathering logs for coredns [40433f7e0d05] ...
	I0914 10:37:12.796061    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40433f7e0d05"
	I0914 10:37:12.807583    4633 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:37:12.807595    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:37:12.843507    4633 logs.go:123] Gathering logs for kube-proxy [f73d335e1ea1] ...
	I0914 10:37:12.843522    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f73d335e1ea1"
	I0914 10:37:12.855727    4633 logs.go:123] Gathering logs for kubelet ...
	I0914 10:37:12.855740    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:37:12.891109    4633 logs.go:123] Gathering logs for coredns [bb0d72a796ab] ...
	I0914 10:37:12.891121    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0d72a796ab"
	I0914 10:37:12.907885    4633 logs.go:123] Gathering logs for kube-scheduler [453a3041b38a] ...
	I0914 10:37:12.907898    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 453a3041b38a"
	I0914 10:37:12.923135    4633 logs.go:123] Gathering logs for kube-controller-manager [0f8efd6fef5c] ...
	I0914 10:37:12.923148    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8efd6fef5c"
	I0914 10:37:12.940543    4633 logs.go:123] Gathering logs for Docker ...
	I0914 10:37:12.940555    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:37:15.467716    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:37:18.086308    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:37:18.086803    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:37:18.118918    5189 logs.go:276] 2 containers: [11f9ffdf6e43 ea8a24c9014a]
	I0914 10:37:18.119078    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:37:18.136655    5189 logs.go:276] 2 containers: [46f64762f77a bc0eb1fe6478]
	I0914 10:37:18.136755    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:37:18.149653    5189 logs.go:276] 1 containers: [d58d98b98ad4]
	I0914 10:37:18.149750    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:37:18.161903    5189 logs.go:276] 2 containers: [1b357929f298 f2165e8cce8d]
	I0914 10:37:18.161990    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:37:18.172902    5189 logs.go:276] 1 containers: [213b21806615]
	I0914 10:37:18.172991    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:37:18.183681    5189 logs.go:276] 2 containers: [72fbfd868f6c ccbe87febee7]
	I0914 10:37:18.183769    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:37:18.194058    5189 logs.go:276] 0 containers: []
	W0914 10:37:18.194073    5189 logs.go:278] No container was found matching "kindnet"
	I0914 10:37:18.194146    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:37:18.206191    5189 logs.go:276] 2 containers: [db8219ef9871 7588b357ac42]
	I0914 10:37:18.206212    5189 logs.go:123] Gathering logs for kube-scheduler [f2165e8cce8d] ...
	I0914 10:37:18.206218    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2165e8cce8d"
	I0914 10:37:18.220732    5189 logs.go:123] Gathering logs for kube-proxy [213b21806615] ...
	I0914 10:37:18.220742    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213b21806615"
	I0914 10:37:18.237076    5189 logs.go:123] Gathering logs for kube-controller-manager [72fbfd868f6c] ...
	I0914 10:37:18.237092    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72fbfd868f6c"
	I0914 10:37:18.260598    5189 logs.go:123] Gathering logs for Docker ...
	I0914 10:37:18.260608    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:37:18.282529    5189 logs.go:123] Gathering logs for etcd [46f64762f77a] ...
	I0914 10:37:18.282538    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46f64762f77a"
	I0914 10:37:18.296061    5189 logs.go:123] Gathering logs for etcd [bc0eb1fe6478] ...
	I0914 10:37:18.296070    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0eb1fe6478"
	I0914 10:37:18.313934    5189 logs.go:123] Gathering logs for coredns [d58d98b98ad4] ...
	I0914 10:37:18.313943    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d58d98b98ad4"
	I0914 10:37:18.325155    5189 logs.go:123] Gathering logs for kube-scheduler [1b357929f298] ...
	I0914 10:37:18.325167    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b357929f298"
	I0914 10:37:18.336620    5189 logs.go:123] Gathering logs for dmesg ...
	I0914 10:37:18.336629    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:37:18.340693    5189 logs.go:123] Gathering logs for kube-apiserver [11f9ffdf6e43] ...
	I0914 10:37:18.340702    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11f9ffdf6e43"
	I0914 10:37:18.355305    5189 logs.go:123] Gathering logs for storage-provisioner [7588b357ac42] ...
	I0914 10:37:18.355317    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7588b357ac42"
	I0914 10:37:18.366670    5189 logs.go:123] Gathering logs for container status ...
	I0914 10:37:18.366686    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:37:18.379142    5189 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:37:18.379155    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:37:18.413141    5189 logs.go:123] Gathering logs for kube-controller-manager [ccbe87febee7] ...
	I0914 10:37:18.413157    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccbe87febee7"
	I0914 10:37:18.427046    5189 logs.go:123] Gathering logs for storage-provisioner [db8219ef9871] ...
	I0914 10:37:18.427059    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8219ef9871"
	I0914 10:37:18.442198    5189 logs.go:123] Gathering logs for kubelet ...
	I0914 10:37:18.442213    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:37:18.478983    5189 logs.go:123] Gathering logs for kube-apiserver [ea8a24c9014a] ...
	I0914 10:37:18.478990    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea8a24c9014a"
	I0914 10:37:20.470192    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:37:20.470440    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:37:20.494836    4633 logs.go:276] 1 containers: [04291acc9ea5]
	I0914 10:37:20.494987    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:37:20.513583    4633 logs.go:276] 1 containers: [849350e1760a]
	I0914 10:37:20.513676    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:37:20.533917    4633 logs.go:276] 4 containers: [426f46946fcd 40433f7e0d05 bb0d72a796ab a39016b44acb]
	I0914 10:37:20.534012    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:37:20.544775    4633 logs.go:276] 1 containers: [453a3041b38a]
	I0914 10:37:20.544861    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:37:20.558173    4633 logs.go:276] 1 containers: [f73d335e1ea1]
	I0914 10:37:20.558255    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:37:20.569401    4633 logs.go:276] 1 containers: [0f8efd6fef5c]
	I0914 10:37:20.569480    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:37:20.579938    4633 logs.go:276] 0 containers: []
	W0914 10:37:20.579953    4633 logs.go:278] No container was found matching "kindnet"
	I0914 10:37:20.580017    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:37:20.590817    4633 logs.go:276] 1 containers: [7fc5fd563cda]
	I0914 10:37:20.590835    4633 logs.go:123] Gathering logs for coredns [a39016b44acb] ...
	I0914 10:37:20.590840    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39016b44acb"
	I0914 10:37:20.604245    4633 logs.go:123] Gathering logs for dmesg ...
	I0914 10:37:20.604259    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:37:20.609130    4633 logs.go:123] Gathering logs for etcd [849350e1760a] ...
	I0914 10:37:20.609137    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849350e1760a"
	I0914 10:37:20.622769    4633 logs.go:123] Gathering logs for kube-controller-manager [0f8efd6fef5c] ...
	I0914 10:37:20.622778    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8efd6fef5c"
	I0914 10:37:20.640342    4633 logs.go:123] Gathering logs for kubelet ...
	I0914 10:37:20.640353    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:37:20.676427    4633 logs.go:123] Gathering logs for kube-scheduler [453a3041b38a] ...
	I0914 10:37:20.676437    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 453a3041b38a"
	I0914 10:37:20.691393    4633 logs.go:123] Gathering logs for Docker ...
	I0914 10:37:20.691405    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:37:20.716890    4633 logs.go:123] Gathering logs for container status ...
	I0914 10:37:20.716898    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:37:20.728777    4633 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:37:20.728788    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:37:20.765987    4633 logs.go:123] Gathering logs for coredns [426f46946fcd] ...
	I0914 10:37:20.766000    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 426f46946fcd"
	I0914 10:37:20.778351    4633 logs.go:123] Gathering logs for coredns [40433f7e0d05] ...
	I0914 10:37:20.778361    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40433f7e0d05"
	I0914 10:37:20.794352    4633 logs.go:123] Gathering logs for coredns [bb0d72a796ab] ...
	I0914 10:37:20.794363    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0d72a796ab"
	I0914 10:37:20.812278    4633 logs.go:123] Gathering logs for kube-proxy [f73d335e1ea1] ...
	I0914 10:37:20.812288    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f73d335e1ea1"
	I0914 10:37:20.824339    4633 logs.go:123] Gathering logs for storage-provisioner [7fc5fd563cda] ...
	I0914 10:37:20.824350    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc5fd563cda"
	I0914 10:37:20.836034    4633 logs.go:123] Gathering logs for kube-apiserver [04291acc9ea5] ...
	I0914 10:37:20.836046    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04291acc9ea5"
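	[Note: each log-gathering cycle above runs the same two-step pattern per control-plane component: docker ps -a --filter name=k8s_<component> --format {{.ID}} to discover container IDs, then docker logs --tail 400 <id> for every hit. A self-contained Go sketch of that loop; the component list and commands are taken from the log itself, with local exec standing in for minikube's ssh_runner.]

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// gatherComponentLogs lists containers whose name matches k8s_<component>,
// then tails the last 400 log lines of each, mirroring the commands above.
func gatherComponentLogs(component string) error {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
	if err != nil {
		return err
	}
	ids := strings.Fields(string(out))
	fmt.Printf("%d containers: %v\n", len(ids), ids)
	for _, id := range ids {
		logs, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
		if err != nil {
			return err
		}
		fmt.Printf("--- %s [%s] ---\n%s", component, id, logs)
	}
	return nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"} {
		if err := gatherComponentLogs(c); err != nil {
			fmt.Println("gather", c, "failed:", err)
		}
	}
}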
	I0914 10:37:21.019585    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:37:23.352206    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:37:26.021608    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:37:26.021821    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:37:26.034407    5189 logs.go:276] 2 containers: [11f9ffdf6e43 ea8a24c9014a]
	I0914 10:37:26.034504    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:37:26.050207    5189 logs.go:276] 2 containers: [46f64762f77a bc0eb1fe6478]
	I0914 10:37:26.050296    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:37:26.060704    5189 logs.go:276] 1 containers: [d58d98b98ad4]
	I0914 10:37:26.060792    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:37:26.070843    5189 logs.go:276] 2 containers: [1b357929f298 f2165e8cce8d]
	I0914 10:37:26.070926    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:37:26.082490    5189 logs.go:276] 1 containers: [213b21806615]
	I0914 10:37:26.082571    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:37:26.094046    5189 logs.go:276] 2 containers: [72fbfd868f6c ccbe87febee7]
	I0914 10:37:26.094127    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:37:26.103995    5189 logs.go:276] 0 containers: []
	W0914 10:37:26.104007    5189 logs.go:278] No container was found matching "kindnet"
	I0914 10:37:26.104081    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:37:26.114767    5189 logs.go:276] 2 containers: [db8219ef9871 7588b357ac42]
	I0914 10:37:26.114790    5189 logs.go:123] Gathering logs for dmesg ...
	I0914 10:37:26.114796    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:37:26.119719    5189 logs.go:123] Gathering logs for etcd [bc0eb1fe6478] ...
	I0914 10:37:26.119726    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0eb1fe6478"
	I0914 10:37:26.134425    5189 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:37:26.134437    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:37:26.170490    5189 logs.go:123] Gathering logs for etcd [46f64762f77a] ...
	I0914 10:37:26.170504    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46f64762f77a"
	I0914 10:37:26.185422    5189 logs.go:123] Gathering logs for coredns [d58d98b98ad4] ...
	I0914 10:37:26.185433    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d58d98b98ad4"
	I0914 10:37:26.197350    5189 logs.go:123] Gathering logs for kube-scheduler [f2165e8cce8d] ...
	I0914 10:37:26.197361    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2165e8cce8d"
	I0914 10:37:26.211420    5189 logs.go:123] Gathering logs for kube-proxy [213b21806615] ...
	I0914 10:37:26.211432    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213b21806615"
	I0914 10:37:26.223320    5189 logs.go:123] Gathering logs for Docker ...
	I0914 10:37:26.223332    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:37:26.246865    5189 logs.go:123] Gathering logs for container status ...
	I0914 10:37:26.246876    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:37:26.260631    5189 logs.go:123] Gathering logs for kubelet ...
	I0914 10:37:26.260640    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:37:26.299601    5189 logs.go:123] Gathering logs for kube-apiserver [11f9ffdf6e43] ...
	I0914 10:37:26.299610    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11f9ffdf6e43"
	I0914 10:37:26.313307    5189 logs.go:123] Gathering logs for kube-controller-manager [72fbfd868f6c] ...
	I0914 10:37:26.313319    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72fbfd868f6c"
	I0914 10:37:26.330834    5189 logs.go:123] Gathering logs for kube-controller-manager [ccbe87febee7] ...
	I0914 10:37:26.330844    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccbe87febee7"
	I0914 10:37:26.345174    5189 logs.go:123] Gathering logs for storage-provisioner [db8219ef9871] ...
	I0914 10:37:26.345187    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8219ef9871"
	I0914 10:37:26.361106    5189 logs.go:123] Gathering logs for storage-provisioner [7588b357ac42] ...
	I0914 10:37:26.361120    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7588b357ac42"
	I0914 10:37:26.372319    5189 logs.go:123] Gathering logs for kube-apiserver [ea8a24c9014a] ...
	I0914 10:37:26.372331    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea8a24c9014a"
	I0914 10:37:26.411054    5189 logs.go:123] Gathering logs for kube-scheduler [1b357929f298] ...
	I0914 10:37:26.411067    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b357929f298"
	I0914 10:37:28.927738    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:37:28.354322    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:37:28.354801    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:37:28.386430    4633 logs.go:276] 1 containers: [04291acc9ea5]
	I0914 10:37:28.386584    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:37:28.406049    4633 logs.go:276] 1 containers: [849350e1760a]
	I0914 10:37:28.406173    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:37:28.420700    4633 logs.go:276] 4 containers: [426f46946fcd 40433f7e0d05 bb0d72a796ab a39016b44acb]
	I0914 10:37:28.420792    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:37:28.432603    4633 logs.go:276] 1 containers: [453a3041b38a]
	I0914 10:37:28.432680    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:37:28.443377    4633 logs.go:276] 1 containers: [f73d335e1ea1]
	I0914 10:37:28.443467    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:37:28.460733    4633 logs.go:276] 1 containers: [0f8efd6fef5c]
	I0914 10:37:28.460817    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:37:28.471417    4633 logs.go:276] 0 containers: []
	W0914 10:37:28.471432    4633 logs.go:278] No container was found matching "kindnet"
	I0914 10:37:28.471508    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:37:28.483643    4633 logs.go:276] 1 containers: [7fc5fd563cda]
	I0914 10:37:28.483661    4633 logs.go:123] Gathering logs for storage-provisioner [7fc5fd563cda] ...
	I0914 10:37:28.483668    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc5fd563cda"
	I0914 10:37:28.495493    4633 logs.go:123] Gathering logs for etcd [849350e1760a] ...
	I0914 10:37:28.495504    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849350e1760a"
	I0914 10:37:28.512018    4633 logs.go:123] Gathering logs for kube-proxy [f73d335e1ea1] ...
	I0914 10:37:28.512027    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f73d335e1ea1"
	I0914 10:37:28.524135    4633 logs.go:123] Gathering logs for container status ...
	I0914 10:37:28.524144    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:37:28.536958    4633 logs.go:123] Gathering logs for kube-apiserver [04291acc9ea5] ...
	I0914 10:37:28.536969    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04291acc9ea5"
	I0914 10:37:28.551385    4633 logs.go:123] Gathering logs for coredns [a39016b44acb] ...
	I0914 10:37:28.551395    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39016b44acb"
	I0914 10:37:28.563452    4633 logs.go:123] Gathering logs for kube-scheduler [453a3041b38a] ...
	I0914 10:37:28.563462    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 453a3041b38a"
	I0914 10:37:28.582648    4633 logs.go:123] Gathering logs for coredns [40433f7e0d05] ...
	I0914 10:37:28.582658    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40433f7e0d05"
	I0914 10:37:28.597972    4633 logs.go:123] Gathering logs for dmesg ...
	I0914 10:37:28.597982    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:37:28.602401    4633 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:37:28.602411    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:37:28.636954    4633 logs.go:123] Gathering logs for coredns [426f46946fcd] ...
	I0914 10:37:28.636966    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 426f46946fcd"
	I0914 10:37:28.648994    4633 logs.go:123] Gathering logs for coredns [bb0d72a796ab] ...
	I0914 10:37:28.649006    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0d72a796ab"
	I0914 10:37:28.665356    4633 logs.go:123] Gathering logs for kube-controller-manager [0f8efd6fef5c] ...
	I0914 10:37:28.665366    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8efd6fef5c"
	I0914 10:37:28.683749    4633 logs.go:123] Gathering logs for Docker ...
	I0914 10:37:28.683758    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:37:28.710200    4633 logs.go:123] Gathering logs for kubelet ...
	I0914 10:37:28.710213    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:37:31.247312    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:37:33.929813    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:37:33.929934    5189 kubeadm.go:597] duration metric: took 4m3.949047333s to restartPrimaryControlPlane
	W0914 10:37:33.930006    5189 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0914 10:37:33.930042    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0914 10:37:34.993706    5189 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.063695708s)
	I0914 10:37:34.993782    5189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 10:37:34.998882    5189 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 10:37:35.002129    5189 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 10:37:35.005026    5189 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 10:37:35.005032    5189 kubeadm.go:157] found existing configuration files:
	
	I0914 10:37:35.005059    5189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50518 /etc/kubernetes/admin.conf
	I0914 10:37:35.007658    5189 kubeadm.go:163] "https://control-plane.minikube.internal:50518" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50518 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 10:37:35.007681    5189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 10:37:35.010952    5189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50518 /etc/kubernetes/kubelet.conf
	I0914 10:37:35.014234    5189 kubeadm.go:163] "https://control-plane.minikube.internal:50518" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50518 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 10:37:35.014259    5189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 10:37:35.016974    5189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50518 /etc/kubernetes/controller-manager.conf
	I0914 10:37:35.019346    5189 kubeadm.go:163] "https://control-plane.minikube.internal:50518" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50518 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 10:37:35.019375    5189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 10:37:35.022726    5189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50518 /etc/kubernetes/scheduler.conf
	I0914 10:37:35.025815    5189 kubeadm.go:163] "https://control-plane.minikube.internal:50518" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50518 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 10:37:35.025844    5189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
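	[Note: the cleanup above checks each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes any file that is missing it, so that the following kubeadm init regenerates them. A Go sketch of that logic; the endpoint string is copied from the log and error handling is simplified.]

package main

import (
	"bytes"
	"fmt"
	"os"
)

// cleanupStaleConfigs keeps a kubeconfig only if it references the expected
// control-plane endpoint; otherwise it is removed (rm -f semantics).
func cleanupStaleConfigs(endpoint string) {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !bytes.Contains(data, []byte(endpoint)) {
			os.Remove(f) // missing file or wrong endpoint: clear it
			fmt.Printf("%q not found in %s - removed\n", endpoint, f)
		}
	}
}

func main() {
	cleanupStaleConfigs("https://control-plane.minikube.internal:50518")
}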
	I0914 10:37:35.028343    5189 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 10:37:35.046524    5189 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0914 10:37:35.046556    5189 kubeadm.go:310] [preflight] Running pre-flight checks
	I0914 10:37:35.097232    5189 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 10:37:35.097295    5189 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 10:37:35.097347    5189 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0914 10:37:35.147467    5189 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 10:37:35.151591    5189 out.go:235]   - Generating certificates and keys ...
	I0914 10:37:35.151626    5189 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0914 10:37:35.151656    5189 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0914 10:37:35.151708    5189 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0914 10:37:35.151753    5189 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0914 10:37:35.151795    5189 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0914 10:37:35.151822    5189 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0914 10:37:35.151862    5189 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0914 10:37:35.151898    5189 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0914 10:37:35.151938    5189 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0914 10:37:35.151978    5189 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0914 10:37:35.151999    5189 kubeadm.go:310] [certs] Using the existing "sa" key
	I0914 10:37:35.152033    5189 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 10:37:35.335388    5189 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 10:37:35.387549    5189 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 10:37:35.479649    5189 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 10:37:35.740483    5189 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 10:37:35.769660    5189 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 10:37:35.770050    5189 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 10:37:35.770075    5189 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0914 10:37:35.853905    5189 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 10:37:35.862138    5189 out.go:235]   - Booting up control plane ...
	I0914 10:37:35.862194    5189 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 10:37:35.862232    5189 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 10:37:35.862269    5189 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 10:37:35.862316    5189 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 10:37:35.862402    5189 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0914 10:37:36.249490    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:37:36.249619    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:37:36.260686    4633 logs.go:276] 1 containers: [04291acc9ea5]
	I0914 10:37:36.260768    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:37:36.271670    4633 logs.go:276] 1 containers: [849350e1760a]
	I0914 10:37:36.271764    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:37:36.282929    4633 logs.go:276] 4 containers: [426f46946fcd 40433f7e0d05 bb0d72a796ab a39016b44acb]
	I0914 10:37:36.283009    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:37:36.293810    4633 logs.go:276] 1 containers: [453a3041b38a]
	I0914 10:37:36.293895    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:37:36.304473    4633 logs.go:276] 1 containers: [f73d335e1ea1]
	I0914 10:37:36.304550    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:37:36.316237    4633 logs.go:276] 1 containers: [0f8efd6fef5c]
	I0914 10:37:36.316327    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:37:36.328195    4633 logs.go:276] 0 containers: []
	W0914 10:37:36.328208    4633 logs.go:278] No container was found matching "kindnet"
	I0914 10:37:36.328285    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:37:36.339640    4633 logs.go:276] 1 containers: [7fc5fd563cda]
	I0914 10:37:36.339661    4633 logs.go:123] Gathering logs for kubelet ...
	I0914 10:37:36.339667    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:37:36.376574    4633 logs.go:123] Gathering logs for dmesg ...
	I0914 10:37:36.376593    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:37:36.381386    4633 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:37:36.381392    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:37:36.416057    4633 logs.go:123] Gathering logs for etcd [849350e1760a] ...
	I0914 10:37:36.416067    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849350e1760a"
	I0914 10:37:36.431024    4633 logs.go:123] Gathering logs for coredns [426f46946fcd] ...
	I0914 10:37:36.431043    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 426f46946fcd"
	I0914 10:37:36.444473    4633 logs.go:123] Gathering logs for coredns [bb0d72a796ab] ...
	I0914 10:37:36.444484    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0d72a796ab"
	I0914 10:37:36.457014    4633 logs.go:123] Gathering logs for kube-scheduler [453a3041b38a] ...
	I0914 10:37:36.457025    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 453a3041b38a"
	I0914 10:37:36.472426    4633 logs.go:123] Gathering logs for kube-controller-manager [0f8efd6fef5c] ...
	I0914 10:37:36.472439    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8efd6fef5c"
	I0914 10:37:36.490566    4633 logs.go:123] Gathering logs for container status ...
	I0914 10:37:36.490582    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:37:36.504952    4633 logs.go:123] Gathering logs for kube-apiserver [04291acc9ea5] ...
	I0914 10:37:36.504963    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04291acc9ea5"
	I0914 10:37:36.519202    4633 logs.go:123] Gathering logs for coredns [40433f7e0d05] ...
	I0914 10:37:36.519213    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40433f7e0d05"
	I0914 10:37:36.530882    4633 logs.go:123] Gathering logs for kube-proxy [f73d335e1ea1] ...
	I0914 10:37:36.530898    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f73d335e1ea1"
	I0914 10:37:36.542593    4633 logs.go:123] Gathering logs for Docker ...
	I0914 10:37:36.542603    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:37:36.565828    4633 logs.go:123] Gathering logs for coredns [a39016b44acb] ...
	I0914 10:37:36.565835    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39016b44acb"
	I0914 10:37:36.577765    4633 logs.go:123] Gathering logs for storage-provisioner [7fc5fd563cda] ...
	I0914 10:37:36.577775    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc5fd563cda"
	I0914 10:37:40.856619    5189 kubeadm.go:310] [apiclient] All control plane components are healthy after 5.001897 seconds
	I0914 10:37:40.856720    5189 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0914 10:37:40.860298    5189 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0914 10:37:41.373171    5189 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0914 10:37:41.373593    5189 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-130000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0914 10:37:41.878294    5189 kubeadm.go:310] [bootstrap-token] Using token: r1mbrg.cr7msc60nic2b0om
	I0914 10:37:41.882077    5189 out.go:235]   - Configuring RBAC rules ...
	I0914 10:37:41.882137    5189 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0914 10:37:41.883939    5189 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0914 10:37:41.889925    5189 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0914 10:37:41.890802    5189 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0914 10:37:41.891530    5189 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0914 10:37:41.892460    5189 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0914 10:37:41.895579    5189 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0914 10:37:42.052341    5189 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0914 10:37:42.286430    5189 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0914 10:37:42.287041    5189 kubeadm.go:310] 
	I0914 10:37:42.287079    5189 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0914 10:37:42.287099    5189 kubeadm.go:310] 
	I0914 10:37:42.287150    5189 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0914 10:37:42.287157    5189 kubeadm.go:310] 
	I0914 10:37:42.287176    5189 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0914 10:37:42.287210    5189 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0914 10:37:42.287240    5189 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0914 10:37:42.287242    5189 kubeadm.go:310] 
	I0914 10:37:42.287290    5189 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0914 10:37:42.287294    5189 kubeadm.go:310] 
	I0914 10:37:42.287318    5189 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0914 10:37:42.287322    5189 kubeadm.go:310] 
	I0914 10:37:42.287349    5189 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0914 10:37:42.287387    5189 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0914 10:37:42.287424    5189 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0914 10:37:42.287429    5189 kubeadm.go:310] 
	I0914 10:37:42.287478    5189 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0914 10:37:42.287518    5189 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0914 10:37:42.287523    5189 kubeadm.go:310] 
	I0914 10:37:42.287565    5189 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token r1mbrg.cr7msc60nic2b0om \
	I0914 10:37:42.287618    5189 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6f2bcbe86b7524eabb66e32d65311e5f1e28ed403ce521627df0d2c85d84c574 \
	I0914 10:37:42.287634    5189 kubeadm.go:310] 	--control-plane 
	I0914 10:37:42.287637    5189 kubeadm.go:310] 
	I0914 10:37:42.287675    5189 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0914 10:37:42.287678    5189 kubeadm.go:310] 
	I0914 10:37:42.287721    5189 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token r1mbrg.cr7msc60nic2b0om \
	I0914 10:37:42.287775    5189 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6f2bcbe86b7524eabb66e32d65311e5f1e28ed403ce521627df0d2c85d84c574 
	I0914 10:37:42.288014    5189 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
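	[Note: the join commands printed above pin the cluster CA via --discovery-token-ca-cert-hash. Per the kubeadm discovery scheme, that value is the SHA-256 of the CA certificate's DER-encoded Subject Public Key Info. A Go sketch of the computation; the default kubeadm CA path used here is an assumption.]

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

// caCertHash returns the "sha256:<hex>" pin kubeadm prints for joins:
// a SHA-256 digest over the CA cert's raw SubjectPublicKeyInfo.
func caCertHash(caPath string) (string, error) {
	pemBytes, err := os.ReadFile(caPath)
	if err != nil {
		return "", err
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		return "", fmt.Errorf("no PEM block in %s", caPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	return "sha256:" + hex.EncodeToString(sum[:]), nil
}

func main() {
	h, err := caCertHash("/etc/kubernetes/pki/ca.crt")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(h) // compare against the hash in the join command above
}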
	I0914 10:37:42.288093    5189 cni.go:84] Creating CNI manager for ""
	I0914 10:37:42.288102    5189 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 10:37:42.291776    5189 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 10:37:39.091286    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:37:42.303125    5189 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 10:37:42.306083    5189 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)

	I0914 10:37:42.311107    5189 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 10:37:42.311157    5189 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 10:37:42.311171    5189 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-130000 minikube.k8s.io/updated_at=2024_09_14T10_37_42_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=fbeeb744274463b05401c917e5ab21bbaf5ef95a minikube.k8s.io/name=stopped-upgrade-130000 minikube.k8s.io/primary=true
	I0914 10:37:42.353545    5189 kubeadm.go:1113] duration metric: took 42.432292ms to wait for elevateKubeSystemPrivileges
	I0914 10:37:42.353563    5189 ops.go:34] apiserver oom_adj: -16
	I0914 10:37:42.353568    5189 kubeadm.go:394] duration metric: took 4m12.386462042s to StartCluster
	I0914 10:37:42.353578    5189 settings.go:142] acquiring lock: {Name:mk7db576f28fda26cf1d7d854618889d7d4f8a49 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 10:37:42.353666    5189 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19643-1079/kubeconfig
	I0914 10:37:42.354068    5189 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19643-1079/kubeconfig: {Name:mk2bfa274931cfcaab81c340801bce4006cf7459 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
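	[Note: the kubeconfig update above is serialized through a write lock with Delay:500ms and Timeout:1m0s, as shown in the lock.go line. A Go sketch of that acquire-with-retry shape; the O_CREATE|O_EXCL lock file is a simple stand-in for minikube's actual lock implementation.]

package main

import (
	"fmt"
	"os"
	"time"
)

// acquireLock takes an exclusive lock file next to the target path,
// retrying every delay until timeout, then returns a release function.
func acquireLock(path string, delay, timeout time.Duration) (release func(), err error) {
	lockPath := path + ".lock"
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(lockPath, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(lockPath) }, nil
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out acquiring lock for %s", path)
		}
		time.Sleep(delay)
	}
}

func main() {
	release, err := acquireLock(os.ExpandEnv("$HOME/.kube/config"), 500*time.Millisecond, time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer release()
	// ... write the kubeconfig while holding the lock ...
}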
	I0914 10:37:42.354248    5189 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 10:37:42.354260    5189 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0914 10:37:42.354298    5189 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-130000"
	I0914 10:37:42.354306    5189 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-130000"
	W0914 10:37:42.354310    5189 addons.go:243] addon storage-provisioner should already be in state true
	I0914 10:37:42.354326    5189 host.go:66] Checking if "stopped-upgrade-130000" exists ...
	I0914 10:37:42.354330    5189 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-130000"
	I0914 10:37:42.354340    5189 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-130000"
	I0914 10:37:42.354343    5189 config.go:182] Loaded profile config "stopped-upgrade-130000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0914 10:37:42.355220    5189 kapi.go:59] client config for stopped-upgrade-130000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/stopped-upgrade-130000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/stopped-upgrade-130000/client.key", CAFile:"/Users/jenkins/minikube-integration/19643-1079/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105b69800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0914 10:37:42.355348    5189 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-130000"
	W0914 10:37:42.355353    5189 addons.go:243] addon default-storageclass should already be in state true
	I0914 10:37:42.355359    5189 host.go:66] Checking if "stopped-upgrade-130000" exists ...
	I0914 10:37:42.357934    5189 out.go:177] * Verifying Kubernetes components...
	I0914 10:37:42.358300    5189 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 10:37:42.362178    5189 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 10:37:42.362184    5189 sshutil.go:53] new ssh client: &{IP:localhost Port:50483 SSHKeyPath:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/stopped-upgrade-130000/id_rsa Username:docker}
	I0914 10:37:42.364962    5189 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 10:37:42.368976    5189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 10:37:42.373028    5189 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 10:37:42.373036    5189 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 10:37:42.373043    5189 sshutil.go:53] new ssh client: &{IP:localhost Port:50483 SSHKeyPath:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/stopped-upgrade-130000/id_rsa Username:docker}
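	[Note: the addon manifests above are transferred through the SSH client created by sshutil (localhost:50483, user docker, the profile's id_rsa key) rather than written locally; the log calls this "scp memory -->". A Go sketch of pushing in-memory bytes to the guest using golang.org/x/crypto/ssh; the sudo tee command is an assumed stand-in for minikube's transfer mechanism.]

package main

import (
	"bytes"
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// pushFile streams content to remotePath on the guest over SSH.
func pushFile(addr, user, keyPath, remotePath string, content []byte) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM, not a production host
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return err
	}
	defer session.Close()
	session.Stdin = bytes.NewReader(content)
	return session.Run(fmt.Sprintf("sudo tee %s >/dev/null", remotePath))
}

func main() {
	manifest := []byte("# storage-provisioner manifest bytes would go here\n")
	err := pushFile("localhost:50483", "docker",
		"/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/stopped-upgrade-130000/id_rsa",
		"/etc/kubernetes/addons/storage-provisioner.yaml", manifest)
	if err != nil {
		fmt.Println(err)
	}
}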
	I0914 10:37:42.454440    5189 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 10:37:42.460286    5189 api_server.go:52] waiting for apiserver process to appear ...
	I0914 10:37:42.460341    5189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 10:37:42.464059    5189 api_server.go:72] duration metric: took 109.804708ms to wait for apiserver process to appear ...
	I0914 10:37:42.464066    5189 api_server.go:88] waiting for apiserver healthz status ...
	I0914 10:37:42.464074    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:37:42.471163    5189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 10:37:42.539163    5189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 10:37:42.864986    5189 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0914 10:37:42.865000    5189 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0914 10:37:44.093386    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:37:44.093818    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:37:44.128062    4633 logs.go:276] 1 containers: [04291acc9ea5]
	I0914 10:37:44.128230    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:37:44.146583    4633 logs.go:276] 1 containers: [849350e1760a]
	I0914 10:37:44.146699    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:37:44.161476    4633 logs.go:276] 4 containers: [426f46946fcd 40433f7e0d05 bb0d72a796ab a39016b44acb]
	I0914 10:37:44.161579    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:37:44.180949    4633 logs.go:276] 1 containers: [453a3041b38a]
	I0914 10:37:44.181036    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:37:44.191716    4633 logs.go:276] 1 containers: [f73d335e1ea1]
	I0914 10:37:44.191802    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:37:44.202618    4633 logs.go:276] 1 containers: [0f8efd6fef5c]
	I0914 10:37:44.202704    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:37:44.213246    4633 logs.go:276] 0 containers: []
	W0914 10:37:44.213259    4633 logs.go:278] No container was found matching "kindnet"
	I0914 10:37:44.213338    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:37:44.224033    4633 logs.go:276] 1 containers: [7fc5fd563cda]
	I0914 10:37:44.224056    4633 logs.go:123] Gathering logs for coredns [40433f7e0d05] ...
	I0914 10:37:44.224062    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40433f7e0d05"
	I0914 10:37:44.240199    4633 logs.go:123] Gathering logs for coredns [bb0d72a796ab] ...
	I0914 10:37:44.240210    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0d72a796ab"
	I0914 10:37:44.252259    4633 logs.go:123] Gathering logs for kube-scheduler [453a3041b38a] ...
	I0914 10:37:44.252272    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 453a3041b38a"
	I0914 10:37:44.267928    4633 logs.go:123] Gathering logs for storage-provisioner [7fc5fd563cda] ...
	I0914 10:37:44.267944    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc5fd563cda"
	I0914 10:37:44.280275    4633 logs.go:123] Gathering logs for etcd [849350e1760a] ...
	I0914 10:37:44.280286    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849350e1760a"
	I0914 10:37:44.294478    4633 logs.go:123] Gathering logs for kube-apiserver [04291acc9ea5] ...
	I0914 10:37:44.294487    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04291acc9ea5"
	I0914 10:37:44.308677    4633 logs.go:123] Gathering logs for kube-proxy [f73d335e1ea1] ...
	I0914 10:37:44.308687    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f73d335e1ea1"
	I0914 10:37:44.320535    4633 logs.go:123] Gathering logs for kubelet ...
	I0914 10:37:44.320551    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:37:44.354849    4633 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:37:44.354861    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:37:44.390520    4633 logs.go:123] Gathering logs for coredns [426f46946fcd] ...
	I0914 10:37:44.390534    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 426f46946fcd"
	I0914 10:37:44.402180    4633 logs.go:123] Gathering logs for coredns [a39016b44acb] ...
	I0914 10:37:44.402190    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39016b44acb"
	I0914 10:37:44.413436    4633 logs.go:123] Gathering logs for kube-controller-manager [0f8efd6fef5c] ...
	I0914 10:37:44.413450    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8efd6fef5c"
	I0914 10:37:44.432451    4633 logs.go:123] Gathering logs for Docker ...
	I0914 10:37:44.432460    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:37:44.455501    4633 logs.go:123] Gathering logs for container status ...
	I0914 10:37:44.455510    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:37:44.466989    4633 logs.go:123] Gathering logs for dmesg ...
	I0914 10:37:44.467004    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:37:46.973747    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:37:47.466003    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:37:47.466087    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:37:51.975962    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:37:51.976151    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:37:51.988271    4633 logs.go:276] 1 containers: [04291acc9ea5]
	I0914 10:37:51.988358    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:37:51.998445    4633 logs.go:276] 1 containers: [849350e1760a]
	I0914 10:37:51.998536    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:37:52.011761    4633 logs.go:276] 4 containers: [426f46946fcd 40433f7e0d05 bb0d72a796ab a39016b44acb]
	I0914 10:37:52.011846    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:37:52.022680    4633 logs.go:276] 1 containers: [453a3041b38a]
	I0914 10:37:52.022776    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:37:52.036226    4633 logs.go:276] 1 containers: [f73d335e1ea1]
	I0914 10:37:52.036307    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:37:52.050852    4633 logs.go:276] 1 containers: [0f8efd6fef5c]
	I0914 10:37:52.050941    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:37:52.061651    4633 logs.go:276] 0 containers: []
	W0914 10:37:52.061665    4633 logs.go:278] No container was found matching "kindnet"
	I0914 10:37:52.061742    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:37:52.073550    4633 logs.go:276] 1 containers: [7fc5fd563cda]
	I0914 10:37:52.073573    4633 logs.go:123] Gathering logs for dmesg ...
	I0914 10:37:52.073578    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:37:52.078525    4633 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:37:52.078532    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:37:52.116465    4633 logs.go:123] Gathering logs for kube-controller-manager [0f8efd6fef5c] ...
	I0914 10:37:52.116476    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8efd6fef5c"
	I0914 10:37:52.134895    4633 logs.go:123] Gathering logs for container status ...
	I0914 10:37:52.134915    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:37:52.147432    4633 logs.go:123] Gathering logs for kubelet ...
	I0914 10:37:52.147445    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:37:52.183158    4633 logs.go:123] Gathering logs for etcd [849350e1760a] ...
	I0914 10:37:52.183175    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849350e1760a"
	I0914 10:37:52.201301    4633 logs.go:123] Gathering logs for coredns [426f46946fcd] ...
	I0914 10:37:52.201316    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 426f46946fcd"
	I0914 10:37:52.212905    4633 logs.go:123] Gathering logs for coredns [a39016b44acb] ...
	I0914 10:37:52.212915    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39016b44acb"
	I0914 10:37:52.232153    4633 logs.go:123] Gathering logs for kube-proxy [f73d335e1ea1] ...
	I0914 10:37:52.232164    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f73d335e1ea1"
	I0914 10:37:52.244295    4633 logs.go:123] Gathering logs for kube-apiserver [04291acc9ea5] ...
	I0914 10:37:52.244308    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04291acc9ea5"
	I0914 10:37:52.259865    4633 logs.go:123] Gathering logs for storage-provisioner [7fc5fd563cda] ...
	I0914 10:37:52.259875    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc5fd563cda"
	I0914 10:37:52.276597    4633 logs.go:123] Gathering logs for Docker ...
	I0914 10:37:52.276608    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:37:52.299592    4633 logs.go:123] Gathering logs for coredns [40433f7e0d05] ...
	I0914 10:37:52.299603    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40433f7e0d05"
	I0914 10:37:52.311036    4633 logs.go:123] Gathering logs for coredns [bb0d72a796ab] ...
	I0914 10:37:52.311046    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0d72a796ab"
	I0914 10:37:52.323741    4633 logs.go:123] Gathering logs for kube-scheduler [453a3041b38a] ...
	I0914 10:37:52.323752    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 453a3041b38a"
	I0914 10:37:52.466334    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:37:52.466362    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:37:54.841147    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:37:57.466489    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:37:57.466516    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:37:59.843200    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:37:59.843307    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:37:59.854604    4633 logs.go:276] 1 containers: [04291acc9ea5]
	I0914 10:37:59.854702    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:37:59.865540    4633 logs.go:276] 1 containers: [849350e1760a]
	I0914 10:37:59.865627    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:37:59.876673    4633 logs.go:276] 4 containers: [426f46946fcd 40433f7e0d05 bb0d72a796ab a39016b44acb]
	I0914 10:37:59.876765    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:37:59.887605    4633 logs.go:276] 1 containers: [453a3041b38a]
	I0914 10:37:59.887686    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:37:59.898237    4633 logs.go:276] 1 containers: [f73d335e1ea1]
	I0914 10:37:59.898321    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:37:59.914122    4633 logs.go:276] 1 containers: [0f8efd6fef5c]
	I0914 10:37:59.914203    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:37:59.928592    4633 logs.go:276] 0 containers: []
	W0914 10:37:59.928604    4633 logs.go:278] No container was found matching "kindnet"
	I0914 10:37:59.928672    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:37:59.939243    4633 logs.go:276] 1 containers: [7fc5fd563cda]
	I0914 10:37:59.939260    4633 logs.go:123] Gathering logs for coredns [426f46946fcd] ...
	I0914 10:37:59.939265    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 426f46946fcd"
	I0914 10:37:59.954080    4633 logs.go:123] Gathering logs for coredns [40433f7e0d05] ...
	I0914 10:37:59.954090    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40433f7e0d05"
	I0914 10:37:59.965676    4633 logs.go:123] Gathering logs for coredns [bb0d72a796ab] ...
	I0914 10:37:59.965685    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0d72a796ab"
	I0914 10:37:59.977104    4633 logs.go:123] Gathering logs for kube-controller-manager [0f8efd6fef5c] ...
	I0914 10:37:59.977112    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8efd6fef5c"
	I0914 10:37:59.995451    4633 logs.go:123] Gathering logs for container status ...
	I0914 10:37:59.995461    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:38:00.007037    4633 logs.go:123] Gathering logs for kubelet ...
	I0914 10:38:00.007051    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:38:00.041124    4633 logs.go:123] Gathering logs for coredns [a39016b44acb] ...
	I0914 10:38:00.041133    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39016b44acb"
	I0914 10:38:00.052889    4633 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:38:00.052901    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:38:00.086779    4633 logs.go:123] Gathering logs for kube-apiserver [04291acc9ea5] ...
	I0914 10:38:00.086794    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04291acc9ea5"
	I0914 10:38:00.100954    4633 logs.go:123] Gathering logs for kube-proxy [f73d335e1ea1] ...
	I0914 10:38:00.100964    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f73d335e1ea1"
	I0914 10:38:00.111938    4633 logs.go:123] Gathering logs for Docker ...
	I0914 10:38:00.111947    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:38:00.136806    4633 logs.go:123] Gathering logs for dmesg ...
	I0914 10:38:00.136820    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:38:00.141292    4633 logs.go:123] Gathering logs for etcd [849350e1760a] ...
	I0914 10:38:00.141300    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849350e1760a"
	I0914 10:38:00.156877    4633 logs.go:123] Gathering logs for kube-scheduler [453a3041b38a] ...
	I0914 10:38:00.156888    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 453a3041b38a"
	I0914 10:38:00.172408    4633 logs.go:123] Gathering logs for storage-provisioner [7fc5fd563cda] ...
	I0914 10:38:00.172418    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc5fd563cda"
	I0914 10:38:02.466772    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:38:02.466800    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:38:02.686192    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:38:07.467269    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:38:07.467323    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:38:07.688258    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:38:07.688441    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:38:07.703947    4633 logs.go:276] 1 containers: [04291acc9ea5]
	I0914 10:38:07.704044    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:38:07.715343    4633 logs.go:276] 1 containers: [849350e1760a]
	I0914 10:38:07.715434    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:38:07.726505    4633 logs.go:276] 5 containers: [15fe6196b690 bfd281589cff 426f46946fcd 40433f7e0d05 a39016b44acb]
	I0914 10:38:07.726583    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:38:07.738994    4633 logs.go:276] 1 containers: [453a3041b38a]
	I0914 10:38:07.739092    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:38:07.749888    4633 logs.go:276] 1 containers: [f73d335e1ea1]
	I0914 10:38:07.749973    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:38:07.760052    4633 logs.go:276] 1 containers: [0f8efd6fef5c]
	I0914 10:38:07.760133    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:38:07.770375    4633 logs.go:276] 0 containers: []
	W0914 10:38:07.770387    4633 logs.go:278] No container was found matching "kindnet"
	I0914 10:38:07.770460    4633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:38:07.784525    4633 logs.go:276] 1 containers: [7fc5fd563cda]
	I0914 10:38:07.784542    4633 logs.go:123] Gathering logs for etcd [849350e1760a] ...
	I0914 10:38:07.784547    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849350e1760a"
	I0914 10:38:07.798354    4633 logs.go:123] Gathering logs for coredns [426f46946fcd] ...
	I0914 10:38:07.798364    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 426f46946fcd"
	I0914 10:38:07.810428    4633 logs.go:123] Gathering logs for kube-scheduler [453a3041b38a] ...
	I0914 10:38:07.810440    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 453a3041b38a"
	I0914 10:38:07.826317    4633 logs.go:123] Gathering logs for kube-proxy [f73d335e1ea1] ...
	I0914 10:38:07.826326    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f73d335e1ea1"
	I0914 10:38:07.837983    4633 logs.go:123] Gathering logs for container status ...
	I0914 10:38:07.837993    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:38:07.858265    4633 logs.go:123] Gathering logs for kubelet ...
	I0914 10:38:07.858279    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:38:07.892942    4633 logs.go:123] Gathering logs for coredns [15fe6196b690] ...
	I0914 10:38:07.892957    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15fe6196b690"
	I0914 10:38:07.904177    4633 logs.go:123] Gathering logs for coredns [bfd281589cff] ...
	I0914 10:38:07.904190    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfd281589cff"
	I0914 10:38:07.915326    4633 logs.go:123] Gathering logs for Docker ...
	I0914 10:38:07.915342    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:38:07.940298    4633 logs.go:123] Gathering logs for dmesg ...
	I0914 10:38:07.940308    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:38:07.944968    4633 logs.go:123] Gathering logs for kube-apiserver [04291acc9ea5] ...
	I0914 10:38:07.944975    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04291acc9ea5"
	I0914 10:38:07.960108    4633 logs.go:123] Gathering logs for coredns [40433f7e0d05] ...
	I0914 10:38:07.960121    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40433f7e0d05"
	I0914 10:38:07.971858    4633 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:38:07.971872    4633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:38:08.009661    4633 logs.go:123] Gathering logs for kube-controller-manager [0f8efd6fef5c] ...
	I0914 10:38:08.009676    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8efd6fef5c"
	I0914 10:38:08.030953    4633 logs.go:123] Gathering logs for storage-provisioner [7fc5fd563cda] ...
	I0914 10:38:08.030963    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc5fd563cda"
	I0914 10:38:08.042668    4633 logs.go:123] Gathering logs for coredns [a39016b44acb] ...
	I0914 10:38:08.042677    4633 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39016b44acb"
	W0914 10:38:08.053131    4633 logs.go:130] failed coredns [a39016b44acb]: command: /bin/bash -c "docker logs --tail 400 a39016b44acb" /bin/bash -c "docker logs --tail 400 a39016b44acb": Process exited with status 1
	stdout:
	
	stderr:
	Error: No such container: a39016b44acb
	 output: 
	** stderr ** 
	Error: No such container: a39016b44acb
	
	** /stderr **
	I0914 10:38:10.554360    4633 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:38:12.467995    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:38:12.468056    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0914 10:38:12.864902    5189 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0914 10:38:12.868706    5189 out.go:177] * Enabled addons: storage-provisioner
	I0914 10:38:15.555675    4633 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:38:15.561447    4633 out.go:201] 
	W0914 10:38:15.565325    4633 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0914 10:38:15.565335    4633 out.go:270] * 
	W0914 10:38:15.566158    4633 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 10:38:15.577286    4633 out.go:201] 
	I0914 10:38:12.878532    5189 addons.go:510] duration metric: took 30.525558166s for enable addons: enabled=[storage-provisioner]
	I0914 10:38:17.469072    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:38:17.469168    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:38:22.471178    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:38:22.471229    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:38:27.473185    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:38:27.473235    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
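The interleaved api_server.go:253/269 lines come from two minikube processes (PIDs 4633 and 5189) each polling https://10.0.2.15:8443/healthz until an overall deadline expires; every probe here times out. A minimal sketch of such a poll loop (the URL and the roughly 5-second per-request timeout are taken from the log; the helper name and TLS handling are illustrative, not minikube's actual implementation):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// pollHealthz polls url until it returns 200 or the overall deadline passes.
// Illustrative only: minikube's real check also inspects the response body
// and trusts the cluster CA; here TLS verification is skipped for brevity.
func pollHealthz(url string, deadline time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the ~5s gaps between log lines
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthy
			}
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("apiserver healthz never reported healthy: context deadline exceeded")
}

func main() {
	if err := pollHealthz("https://10.0.2.15:8443/healthz", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}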
	
	
	==> Docker <==
	-- Journal begins at Sat 2024-09-14 17:29:15 UTC, ends at Sat 2024-09-14 17:38:31 UTC. --
	Sep 14 17:38:07 running-upgrade-158000 cri-dockerd[3092]: time="2024-09-14T17:38:07Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Sep 14 17:38:12 running-upgrade-158000 cri-dockerd[3092]: time="2024-09-14T17:38:12Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Sep 14 17:38:16 running-upgrade-158000 cri-dockerd[3092]: time="2024-09-14T17:38:16Z" level=error msg="ContainerStats resp: {0x400084f6c0 linux}"
	Sep 14 17:38:16 running-upgrade-158000 cri-dockerd[3092]: time="2024-09-14T17:38:16Z" level=error msg="ContainerStats resp: {0x400091f580 linux}"
	Sep 14 17:38:17 running-upgrade-158000 cri-dockerd[3092]: time="2024-09-14T17:38:17Z" level=error msg="ContainerStats resp: {0x40001a6bc0 linux}"
	Sep 14 17:38:17 running-upgrade-158000 cri-dockerd[3092]: time="2024-09-14T17:38:17Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Sep 14 17:38:18 running-upgrade-158000 cri-dockerd[3092]: time="2024-09-14T17:38:18Z" level=error msg="ContainerStats resp: {0x4000919280 linux}"
	Sep 14 17:38:18 running-upgrade-158000 cri-dockerd[3092]: time="2024-09-14T17:38:18Z" level=error msg="ContainerStats resp: {0x40001a7840 linux}"
	Sep 14 17:38:18 running-upgrade-158000 cri-dockerd[3092]: time="2024-09-14T17:38:18Z" level=error msg="ContainerStats resp: {0x4000919c80 linux}"
	Sep 14 17:38:18 running-upgrade-158000 cri-dockerd[3092]: time="2024-09-14T17:38:18Z" level=error msg="ContainerStats resp: {0x40003a1640 linux}"
	Sep 14 17:38:18 running-upgrade-158000 cri-dockerd[3092]: time="2024-09-14T17:38:18Z" level=error msg="ContainerStats resp: {0x40003a1a40 linux}"
	Sep 14 17:38:18 running-upgrade-158000 cri-dockerd[3092]: time="2024-09-14T17:38:18Z" level=error msg="ContainerStats resp: {0x40003a1e00 linux}"
	Sep 14 17:38:18 running-upgrade-158000 cri-dockerd[3092]: time="2024-09-14T17:38:18Z" level=error msg="ContainerStats resp: {0x4000784700 linux}"
	Sep 14 17:38:22 running-upgrade-158000 cri-dockerd[3092]: time="2024-09-14T17:38:22Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Sep 14 17:38:27 running-upgrade-158000 cri-dockerd[3092]: time="2024-09-14T17:38:27Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Sep 14 17:38:28 running-upgrade-158000 cri-dockerd[3092]: time="2024-09-14T17:38:28Z" level=error msg="ContainerStats resp: {0x400084e800 linux}"
	Sep 14 17:38:28 running-upgrade-158000 cri-dockerd[3092]: time="2024-09-14T17:38:28Z" level=error msg="ContainerStats resp: {0x400084f400 linux}"
	Sep 14 17:38:29 running-upgrade-158000 cri-dockerd[3092]: time="2024-09-14T17:38:29Z" level=error msg="ContainerStats resp: {0x400091e4c0 linux}"
	Sep 14 17:38:30 running-upgrade-158000 cri-dockerd[3092]: time="2024-09-14T17:38:30Z" level=error msg="ContainerStats resp: {0x400091f580 linux}"
	Sep 14 17:38:30 running-upgrade-158000 cri-dockerd[3092]: time="2024-09-14T17:38:30Z" level=error msg="ContainerStats resp: {0x4000918a80 linux}"
	Sep 14 17:38:30 running-upgrade-158000 cri-dockerd[3092]: time="2024-09-14T17:38:30Z" level=error msg="ContainerStats resp: {0x400091fc80 linux}"
	Sep 14 17:38:30 running-upgrade-158000 cri-dockerd[3092]: time="2024-09-14T17:38:30Z" level=error msg="ContainerStats resp: {0x40009197c0 linux}"
	Sep 14 17:38:30 running-upgrade-158000 cri-dockerd[3092]: time="2024-09-14T17:38:30Z" level=error msg="ContainerStats resp: {0x40003a0d40 linux}"
	Sep 14 17:38:30 running-upgrade-158000 cri-dockerd[3092]: time="2024-09-14T17:38:30Z" level=error msg="ContainerStats resp: {0x4000919d40 linux}"
	Sep 14 17:38:30 running-upgrade-158000 cri-dockerd[3092]: time="2024-09-14T17:38:30Z" level=error msg="ContainerStats resp: {0x40003a0040 linux}"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	15fe6196b6905       edaa71f2aee88       24 seconds ago      Running             coredns                   2                   570b0dd893a66
	bfd281589cff8       edaa71f2aee88       25 seconds ago      Running             coredns                   2                   6e6b2be5aab84
	426f46946fcd1       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   570b0dd893a66
	40433f7e0d05d       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   6e6b2be5aab84
	f73d335e1ea1d       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   55757df538234
	7fc5fd563cda7       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   6f17bd3c5037a
	849350e1760ac       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   1f9ffb5944d9f
	0f8efd6fef5c7       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   2636f544eca9c
	04291acc9ea5e       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   7804e28fb2787
	453a3041b38ac       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   03f9cec993283
	
	
	==> coredns [15fe6196b690] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 7949712227441529051.1096354021251550130. HINFO: read udp 10.244.0.2:56439->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7949712227441529051.1096354021251550130. HINFO: read udp 10.244.0.2:53173->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7949712227441529051.1096354021251550130. HINFO: read udp 10.244.0.2:43973->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7949712227441529051.1096354021251550130. HINFO: read udp 10.244.0.2:48778->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7949712227441529051.1096354021251550130. HINFO: read udp 10.244.0.2:54886->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7949712227441529051.1096354021251550130. HINFO: read udp 10.244.0.2:35630->10.0.2.3:53: i/o timeout
	
	
	==> coredns [40433f7e0d05] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 5902127919195250190.8275178332549404317. HINFO: read udp 10.244.0.3:44816->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5902127919195250190.8275178332549404317. HINFO: read udp 10.244.0.3:37975->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5902127919195250190.8275178332549404317. HINFO: read udp 10.244.0.3:37215->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5902127919195250190.8275178332549404317. HINFO: read udp 10.244.0.3:51460->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5902127919195250190.8275178332549404317. HINFO: read udp 10.244.0.3:34945->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5902127919195250190.8275178332549404317. HINFO: read udp 10.244.0.3:37938->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5902127919195250190.8275178332549404317. HINFO: read udp 10.244.0.3:42682->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5902127919195250190.8275178332549404317. HINFO: read udp 10.244.0.3:41731->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5902127919195250190.8275178332549404317. HINFO: read udp 10.244.0.3:36197->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5902127919195250190.8275178332549404317. HINFO: read udp 10.244.0.3:47036->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [426f46946fcd] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 5617257109269510776.7528697680347060674. HINFO: read udp 10.244.0.2:40045->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5617257109269510776.7528697680347060674. HINFO: read udp 10.244.0.2:43836->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5617257109269510776.7528697680347060674. HINFO: read udp 10.244.0.2:48014->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5617257109269510776.7528697680347060674. HINFO: read udp 10.244.0.2:55206->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5617257109269510776.7528697680347060674. HINFO: read udp 10.244.0.2:44846->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5617257109269510776.7528697680347060674. HINFO: read udp 10.244.0.2:56994->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5617257109269510776.7528697680347060674. HINFO: read udp 10.244.0.2:38082->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5617257109269510776.7528697680347060674. HINFO: read udp 10.244.0.2:41008->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5617257109269510776.7528697680347060674. HINFO: read udp 10.244.0.2:43204->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5617257109269510776.7528697680347060674. HINFO: read udp 10.244.0.2:57822->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [bfd281589cff] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 3850059161178123268.1959473438448790840. HINFO: read udp 10.244.0.3:51523->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3850059161178123268.1959473438448790840. HINFO: read udp 10.244.0.3:42276->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3850059161178123268.1959473438448790840. HINFO: read udp 10.244.0.3:36727->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3850059161178123268.1959473438448790840. HINFO: read udp 10.244.0.3:55563->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3850059161178123268.1959473438448790840. HINFO: read udp 10.244.0.3:45240->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3850059161178123268.1959473438448790840. HINFO: read udp 10.244.0.3:35637->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3850059161178123268.1959473438448790840. HINFO: read udp 10.244.0.3:56157->10.0.2.3:53: i/o timeout
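All four coredns instances report the same failure: HINFO probes to the upstream resolver 10.0.2.3:53 (QEMU's user-mode-network DNS) time out, so in-cluster DNS never reaches the outside world. The symptom can be reproduced with a short standard-library probe that forces lookups through that upstream (the address comes from the log lines above; the hostname queried is arbitrary):

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	// Force lookups through the upstream that coredns is timing out on.
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			d := net.Dialer{Timeout: 2 * time.Second}
			return d.DialContext(ctx, network, "10.0.2.3:53")
		},
	}
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()
	addrs, err := r.LookupHost(ctx, "kubernetes.io")
	if err != nil {
		fmt.Println("upstream DNS unreachable:", err) // expected here: i/o timeout
		return
	}
	fmt.Println("resolved:", addrs)
}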
	
	
	==> describe nodes <==
	Name:               running-upgrade-158000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-158000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fbeeb744274463b05401c917e5ab21bbaf5ef95a
	                    minikube.k8s.io/name=running-upgrade-158000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_14T10_34_14_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 14 Sep 2024 17:34:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-158000
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Sep 2024 17:38:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 14 Sep 2024 17:34:14 +0000   Sat, 14 Sep 2024 17:34:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 14 Sep 2024 17:34:14 +0000   Sat, 14 Sep 2024 17:34:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 14 Sep 2024 17:34:14 +0000   Sat, 14 Sep 2024 17:34:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 14 Sep 2024 17:34:14 +0000   Sat, 14 Sep 2024 17:34:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-158000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 e7bada6b18e54e64b6e8dbaf4ce4b737
	  System UUID:                e7bada6b18e54e64b6e8dbaf4ce4b737
	  Boot ID:                    37fbfba9-c1e9-4dbd-b6ac-bb29ebf973ce
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-9dx7m                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m2s
	  kube-system                 coredns-6d4b75cb6d-g6vkt                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m2s
	  kube-system                 etcd-running-upgrade-158000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m17s
	  kube-system                 kube-apiserver-running-upgrade-158000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 kube-controller-manager-running-upgrade-158000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 kube-proxy-hl5c5                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 kube-scheduler-running-upgrade-158000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m2s   kube-proxy       
	  Normal  NodeReady                4m17s  kubelet          Node running-upgrade-158000 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  4m17s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m17s  kubelet          Node running-upgrade-158000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m17s  kubelet          Node running-upgrade-158000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m17s  kubelet          Node running-upgrade-158000 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m17s  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m3s   node-controller  Node running-upgrade-158000 event: Registered Node running-upgrade-158000 in Controller
	
	
	==> dmesg <==
	[  +1.943254] systemd-fstab-generator[829]: Ignoring "noauto" for root device
	[  +0.080597] systemd-fstab-generator[840]: Ignoring "noauto" for root device
	[  +0.075672] systemd-fstab-generator[851]: Ignoring "noauto" for root device
	[  +1.136262] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.090003] systemd-fstab-generator[1001]: Ignoring "noauto" for root device
	[  +0.076886] systemd-fstab-generator[1012]: Ignoring "noauto" for root device
	[  +2.544713] systemd-fstab-generator[1293]: Ignoring "noauto" for root device
	[ +18.155518] systemd-fstab-generator[1988]: Ignoring "noauto" for root device
	[  +2.434857] systemd-fstab-generator[2266]: Ignoring "noauto" for root device
	[  +0.191605] systemd-fstab-generator[2304]: Ignoring "noauto" for root device
	[  +0.093110] systemd-fstab-generator[2315]: Ignoring "noauto" for root device
	[  +0.097661] systemd-fstab-generator[2328]: Ignoring "noauto" for root device
	[  +2.661327] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.173840] systemd-fstab-generator[3049]: Ignoring "noauto" for root device
	[  +0.083750] systemd-fstab-generator[3060]: Ignoring "noauto" for root device
	[  +0.078854] systemd-fstab-generator[3071]: Ignoring "noauto" for root device
	[  +0.089909] systemd-fstab-generator[3085]: Ignoring "noauto" for root device
	[  +2.278002] systemd-fstab-generator[3239]: Ignoring "noauto" for root device
	[Sep14 17:30] systemd-fstab-generator[3612]: Ignoring "noauto" for root device
	[  +1.435129] systemd-fstab-generator[3908]: Ignoring "noauto" for root device
	[ +19.478445] kauditd_printk_skb: 68 callbacks suppressed
	[Sep14 17:34] kauditd_printk_skb: 23 callbacks suppressed
	[  +1.682176] systemd-fstab-generator[11926]: Ignoring "noauto" for root device
	[  +5.139894] systemd-fstab-generator[12510]: Ignoring "noauto" for root device
	[  +0.472701] systemd-fstab-generator[12641]: Ignoring "noauto" for root device
	
	
	==> etcd [849350e1760a] <==
	{"level":"info","ts":"2024-09-14T17:34:10.584Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-09-14T17:34:10.584Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-09-14T17:34:10.615Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-14T17:34:10.615Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-14T17:34:10.615Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-14T17:34:10.615Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-09-14T17:34:10.615Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-09-14T17:34:10.969Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-14T17:34:10.969Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-14T17:34:10.969Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-09-14T17:34:10.969Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-09-14T17:34:10.969Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-09-14T17:34:10.969Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-09-14T17:34:10.969Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-09-14T17:34:10.969Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-158000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-14T17:34:10.969Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-14T17:34:10.969Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-14T17:34:10.970Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-14T17:34:10.970Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-14T17:34:10.970Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-14T17:34:10.970Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-14T17:34:10.970Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-09-14T17:34:10.970Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-14T17:34:10.970Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-14T17:34:10.974Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 17:38:31 up 9 min,  0 users,  load average: 0.34, 0.40, 0.24
	Linux running-upgrade-158000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [04291acc9ea5] <==
	I0914 17:34:12.206014       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0914 17:34:12.210155       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0914 17:34:12.211019       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0914 17:34:12.211202       1 cache.go:39] Caches are synced for autoregister controller
	I0914 17:34:12.211292       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0914 17:34:12.211302       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0914 17:34:12.219060       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0914 17:34:12.942963       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0914 17:34:13.118878       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0914 17:34:13.128519       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0914 17:34:13.128750       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0914 17:34:13.269274       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0914 17:34:13.282969       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0914 17:34:13.376026       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0914 17:34:13.377930       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0914 17:34:13.378377       1 controller.go:611] quota admission added evaluator for: endpoints
	I0914 17:34:13.379644       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0914 17:34:14.244353       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0914 17:34:14.563062       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0914 17:34:14.568033       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0914 17:34:14.572495       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0914 17:34:14.607616       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0914 17:34:29.006495       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0914 17:34:29.061000       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0914 17:34:29.571907       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [0f8efd6fef5c] <==
	I0914 17:34:28.254621       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0914 17:34:28.254682       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I0914 17:34:28.254542       1 shared_informer.go:262] Caches are synced for cronjob
	I0914 17:34:28.254847       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0914 17:34:28.254523       1 shared_informer.go:262] Caches are synced for deployment
	I0914 17:34:28.254816       1 event.go:294] "Event occurred" object="running-upgrade-158000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node running-upgrade-158000 event: Registered Node running-upgrade-158000 in Controller"
	I0914 17:34:28.255447       1 shared_informer.go:262] Caches are synced for daemon sets
	I0914 17:34:28.256421       1 shared_informer.go:262] Caches are synced for ephemeral
	I0914 17:34:28.256445       1 shared_informer.go:262] Caches are synced for GC
	I0914 17:34:28.305361       1 shared_informer.go:262] Caches are synced for stateful set
	I0914 17:34:28.355887       1 shared_informer.go:262] Caches are synced for endpoint
	I0914 17:34:28.406003       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0914 17:34:28.406136       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0914 17:34:28.412151       1 shared_informer.go:262] Caches are synced for resource quota
	I0914 17:34:28.465042       1 shared_informer.go:262] Caches are synced for resource quota
	I0914 17:34:28.502264       1 shared_informer.go:262] Caches are synced for disruption
	I0914 17:34:28.502280       1 disruption.go:371] Sending events to api server.
	I0914 17:34:28.505343       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0914 17:34:28.874725       1 shared_informer.go:262] Caches are synced for garbage collector
	I0914 17:34:28.904683       1 shared_informer.go:262] Caches are synced for garbage collector
	I0914 17:34:28.904711       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0914 17:34:29.007746       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0914 17:34:29.063922       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-hl5c5"
	I0914 17:34:29.258333       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-9dx7m"
	I0914 17:34:29.267297       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-g6vkt"
	
	
	==> kube-proxy [f73d335e1ea1] <==
	I0914 17:34:29.559328       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0914 17:34:29.559352       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0914 17:34:29.559361       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0914 17:34:29.569976       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0914 17:34:29.569987       1 server_others.go:206] "Using iptables Proxier"
	I0914 17:34:29.570000       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0914 17:34:29.570173       1 server.go:661] "Version info" version="v1.24.1"
	I0914 17:34:29.570178       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 17:34:29.570398       1 config.go:444] "Starting node config controller"
	I0914 17:34:29.570408       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0914 17:34:29.570527       1 config.go:317] "Starting service config controller"
	I0914 17:34:29.570559       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0914 17:34:29.570582       1 config.go:226] "Starting endpoint slice config controller"
	I0914 17:34:29.570596       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0914 17:34:29.671879       1 shared_informer.go:262] Caches are synced for service config
	I0914 17:34:29.671903       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0914 17:34:29.672002       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [453a3041b38a] <==
	W0914 17:34:12.164523       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0914 17:34:12.164554       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0914 17:34:12.164584       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0914 17:34:12.164591       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0914 17:34:12.164604       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0914 17:34:12.164614       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0914 17:34:12.164655       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0914 17:34:12.164662       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0914 17:34:12.164816       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0914 17:34:12.164849       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0914 17:34:12.165250       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0914 17:34:12.165280       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0914 17:34:13.081180       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0914 17:34:13.081238       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0914 17:34:13.103199       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0914 17:34:13.103286       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0914 17:34:13.132703       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0914 17:34:13.132750       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0914 17:34:13.137906       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0914 17:34:13.138069       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0914 17:34:13.174238       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0914 17:34:13.174366       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0914 17:34:13.201021       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0914 17:34:13.201104       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0914 17:34:13.461994       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Sat 2024-09-14 17:29:15 UTC, ends at Sat 2024-09-14 17:38:32 UTC. --
	Sep 14 17:34:16 running-upgrade-158000 kubelet[12516]: I0914 17:34:16.792291   12516 request.go:601] Waited for 1.136612438s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Sep 14 17:34:16 running-upgrade-158000 kubelet[12516]: E0914 17:34:16.796628   12516 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-running-upgrade-158000\" already exists" pod="kube-system/etcd-running-upgrade-158000"
	Sep 14 17:34:28 running-upgrade-158000 kubelet[12516]: I0914 17:34:28.261167   12516 topology_manager.go:200] "Topology Admit Handler"
	Sep 14 17:34:28 running-upgrade-158000 kubelet[12516]: I0914 17:34:28.267247   12516 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 14 17:34:28 running-upgrade-158000 kubelet[12516]: I0914 17:34:28.267622   12516 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 14 17:34:28 running-upgrade-158000 kubelet[12516]: I0914 17:34:28.368301   12516 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/2d96c171-fcfb-4ae7-bd09-ccb977cc8e22-tmp\") pod \"storage-provisioner\" (UID: \"2d96c171-fcfb-4ae7-bd09-ccb977cc8e22\") " pod="kube-system/storage-provisioner"
	Sep 14 17:34:28 running-upgrade-158000 kubelet[12516]: I0914 17:34:28.368326   12516 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpcgf\" (UniqueName: \"kubernetes.io/projected/2d96c171-fcfb-4ae7-bd09-ccb977cc8e22-kube-api-access-bpcgf\") pod \"storage-provisioner\" (UID: \"2d96c171-fcfb-4ae7-bd09-ccb977cc8e22\") " pod="kube-system/storage-provisioner"
	Sep 14 17:34:28 running-upgrade-158000 kubelet[12516]: E0914 17:34:28.472530   12516 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Sep 14 17:34:28 running-upgrade-158000 kubelet[12516]: E0914 17:34:28.472550   12516 projected.go:192] Error preparing data for projected volume kube-api-access-bpcgf for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Sep 14 17:34:28 running-upgrade-158000 kubelet[12516]: E0914 17:34:28.472583   12516 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/2d96c171-fcfb-4ae7-bd09-ccb977cc8e22-kube-api-access-bpcgf podName:2d96c171-fcfb-4ae7-bd09-ccb977cc8e22 nodeName:}" failed. No retries permitted until 2024-09-14 17:34:28.972570835 +0000 UTC m=+14.422245863 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-bpcgf" (UniqueName: "kubernetes.io/projected/2d96c171-fcfb-4ae7-bd09-ccb977cc8e22-kube-api-access-bpcgf") pod "storage-provisioner" (UID: "2d96c171-fcfb-4ae7-bd09-ccb977cc8e22") : configmap "kube-root-ca.crt" not found
	Sep 14 17:34:29 running-upgrade-158000 kubelet[12516]: I0914 17:34:29.066875   12516 topology_manager.go:200] "Topology Admit Handler"
	Sep 14 17:34:29 running-upgrade-158000 kubelet[12516]: I0914 17:34:29.175584   12516 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4612fd80-c352-4cd1-886e-cb065cfb606e-xtables-lock\") pod \"kube-proxy-hl5c5\" (UID: \"4612fd80-c352-4cd1-886e-cb065cfb606e\") " pod="kube-system/kube-proxy-hl5c5"
	Sep 14 17:34:29 running-upgrade-158000 kubelet[12516]: I0914 17:34:29.175617   12516 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4612fd80-c352-4cd1-886e-cb065cfb606e-kube-proxy\") pod \"kube-proxy-hl5c5\" (UID: \"4612fd80-c352-4cd1-886e-cb065cfb606e\") " pod="kube-system/kube-proxy-hl5c5"
	Sep 14 17:34:29 running-upgrade-158000 kubelet[12516]: I0914 17:34:29.175637   12516 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvn94\" (UniqueName: \"kubernetes.io/projected/4612fd80-c352-4cd1-886e-cb065cfb606e-kube-api-access-qvn94\") pod \"kube-proxy-hl5c5\" (UID: \"4612fd80-c352-4cd1-886e-cb065cfb606e\") " pod="kube-system/kube-proxy-hl5c5"
	Sep 14 17:34:29 running-upgrade-158000 kubelet[12516]: I0914 17:34:29.175649   12516 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4612fd80-c352-4cd1-886e-cb065cfb606e-lib-modules\") pod \"kube-proxy-hl5c5\" (UID: \"4612fd80-c352-4cd1-886e-cb065cfb606e\") " pod="kube-system/kube-proxy-hl5c5"
	Sep 14 17:34:29 running-upgrade-158000 kubelet[12516]: I0914 17:34:29.262339   12516 topology_manager.go:200] "Topology Admit Handler"
	Sep 14 17:34:29 running-upgrade-158000 kubelet[12516]: I0914 17:34:29.272660   12516 topology_manager.go:200] "Topology Admit Handler"
	Sep 14 17:34:29 running-upgrade-158000 kubelet[12516]: I0914 17:34:29.276798   12516 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxvx8\" (UniqueName: \"kubernetes.io/projected/18d2ddc3-dadc-4170-9d1b-f2cc08d91ddb-kube-api-access-xxvx8\") pod \"coredns-6d4b75cb6d-g6vkt\" (UID: \"18d2ddc3-dadc-4170-9d1b-f2cc08d91ddb\") " pod="kube-system/coredns-6d4b75cb6d-g6vkt"
	Sep 14 17:34:29 running-upgrade-158000 kubelet[12516]: I0914 17:34:29.276820   12516 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d5a373e3-c6d6-4c40-b5c9-9ca0f103751e-config-volume\") pod \"coredns-6d4b75cb6d-9dx7m\" (UID: \"d5a373e3-c6d6-4c40-b5c9-9ca0f103751e\") " pod="kube-system/coredns-6d4b75cb6d-9dx7m"
	Sep 14 17:34:29 running-upgrade-158000 kubelet[12516]: I0914 17:34:29.276832   12516 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zq2p9\" (UniqueName: \"kubernetes.io/projected/d5a373e3-c6d6-4c40-b5c9-9ca0f103751e-kube-api-access-zq2p9\") pod \"coredns-6d4b75cb6d-9dx7m\" (UID: \"d5a373e3-c6d6-4c40-b5c9-9ca0f103751e\") " pod="kube-system/coredns-6d4b75cb6d-9dx7m"
	Sep 14 17:34:29 running-upgrade-158000 kubelet[12516]: I0914 17:34:29.276852   12516 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/18d2ddc3-dadc-4170-9d1b-f2cc08d91ddb-config-volume\") pod \"coredns-6d4b75cb6d-g6vkt\" (UID: \"18d2ddc3-dadc-4170-9d1b-f2cc08d91ddb\") " pod="kube-system/coredns-6d4b75cb6d-g6vkt"
	Sep 14 17:34:29 running-upgrade-158000 kubelet[12516]: I0914 17:34:29.792697   12516 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="6e6b2be5aab84da0d5234556e75dd4d7aab3fc9923b85e7ebc2c97f4150f4ced"
	Sep 14 17:34:29 running-upgrade-158000 kubelet[12516]: I0914 17:34:29.794617   12516 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="570b0dd893a66eaca08a3115e30b554bbde3977f5aa958d07e5e751739237599"
	Sep 14 17:38:06 running-upgrade-158000 kubelet[12516]: I0914 17:38:06.972573   12516 scope.go:110] "RemoveContainer" containerID="bb0d72a796aba589d94e1a5449e502ed0e722e9a5052500f07708d7341dfc357"
	Sep 14 17:38:07 running-upgrade-158000 kubelet[12516]: I0914 17:38:07.982908   12516 scope.go:110] "RemoveContainer" containerID="a39016b44acb547b138009336e2e2024753ea3e02646977c6b903970422471af"
	
	
	==> storage-provisioner [7fc5fd563cda] <==
	I0914 17:34:29.362392       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0914 17:34:29.372888       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0914 17:34:29.372912       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0914 17:34:29.377180       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0914 17:34:29.378161       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-158000_075ab366-c0bd-40cd-b26b-9de784445a4d!
	I0914 17:34:29.378587       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c78fd57a-c89c-4a4c-932e-d4bfce272ce5", APIVersion:"v1", ResourceVersion:"359", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-158000_075ab366-c0bd-40cd-b26b-9de784445a4d became leader
	I0914 17:34:29.481549       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-158000_075ab366-c0bd-40cd-b26b-9de784445a4d!
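The storage-provisioner only starts its controller after winning the kube-system/k8s.io-minikube-hostpath leader-election lease, which is the standard client-go pattern. A condensed sketch of that pattern (the provisioner in this log uses an older Endpoints-based lock; this sketch uses the current Lease lock, and the kubeconfig path and identity string are illustrative):

package main

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	lock := &resourcelock.LeaseLock{
		LeaseMeta: metav1.ObjectMeta{
			Namespace: "kube-system",
			Name:      "k8s.io-minikube-hostpath", // lease name from the log
		},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: "sketch-identity"},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				// start the provisioner controller here
			},
			OnStoppedLeading: func() {
				// lost the lease: stop doing leader-only work
			},
		},
	})
}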
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-158000 -n running-upgrade-158000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-158000 -n running-upgrade-158000: exit status 2 (15.728825625s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-158000" apiserver is not running, skipping kubectl commands (state="Stopped")
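Exit status 2 from minikube status is informational rather than fatal here: the command encodes component state in its exit code, which is why the harness notes "may be ok". A sketch of capturing both the formatted output and the exit code, using the same invocation as the harness (binary path relative to the minikube source tree):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "status",
		"--format={{.APIServer}}", "-p", "running-upgrade-158000")
	out, err := cmd.Output()
	fmt.Printf("stdout: %s\n", out) // "Stopped" in this failing run
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// Non-zero codes report component state; the harness
		// treats them as advisory rather than a hard failure.
		fmt.Println("exit status:", ee.ExitCode())
	}
}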
helpers_test.go:175: Cleaning up "running-upgrade-158000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-158000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-arm64 delete -p running-upgrade-158000: (1.222876208s)
--- FAIL: TestRunningBinaryUpgrade (601.13s)
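
The helper chain above leans on minikube's status exit codes: the apiserver probe printed "Stopped" and exited 2, which helpers_test.go tolerates ("may be ok"). A minimal Go sketch of that classification, reusing only the command and flags visible in the log (the exit-2-means-stopped reading is inferred from the output above, not taken from the harness source):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same invocation as helpers_test.go:254 above.
		cmd := exec.Command("out/minikube-darwin-arm64", "status",
			"--format={{.APIServer}}", "-p", "running-upgrade-158000",
			"-n", "running-upgrade-158000")
		out, err := cmd.Output()
		state := strings.TrimSpace(string(out))

		var ee *exec.ExitError
		if errors.As(err, &ee) {
			// A non-zero exit that still reports a readable state
			// (here "Stopped", exit status 2) is logged but not fatal.
			fmt.Printf("status error: exit status %d (may be ok), state=%q\n",
				ee.ExitCode(), state)
			return
		}
		fmt.Printf("state=%q\n", state)
	}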

                                                
                                    
TestKubernetesUpgrade (18.78s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-804000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-804000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (10.218591166s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-804000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19643
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-804000" primary control-plane node in "kubernetes-upgrade-804000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-804000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 10:31:48.129708    5082 out.go:345] Setting OutFile to fd 1 ...
	I0914 10:31:48.129860    5082 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:31:48.129864    5082 out.go:358] Setting ErrFile to fd 2...
	I0914 10:31:48.129866    5082 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:31:48.129991    5082 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19643-1079/.minikube/bin
	I0914 10:31:48.131119    5082 out.go:352] Setting JSON to false
	I0914 10:31:48.147878    5082 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3671,"bootTime":1726331437,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0914 10:31:48.147951    5082 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0914 10:31:48.153159    5082 out.go:177] * [kubernetes-upgrade-804000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0914 10:31:48.160997    5082 out.go:177]   - MINIKUBE_LOCATION=19643
	I0914 10:31:48.161029    5082 notify.go:220] Checking for updates...
	I0914 10:31:48.167998    5082 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	I0914 10:31:48.170982    5082 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 10:31:48.174018    5082 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 10:31:48.177024    5082 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	I0914 10:31:48.180015    5082 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 10:31:48.183299    5082 config.go:182] Loaded profile config "multinode-699000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 10:31:48.183371    5082 config.go:182] Loaded profile config "running-upgrade-158000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0914 10:31:48.183416    5082 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 10:31:48.187009    5082 out.go:177] * Using the qemu2 driver based on user configuration
	I0914 10:31:48.193958    5082 start.go:297] selected driver: qemu2
	I0914 10:31:48.193964    5082 start.go:901] validating driver "qemu2" against <nil>
	I0914 10:31:48.193970    5082 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 10:31:48.196182    5082 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0914 10:31:48.198954    5082 out.go:177] * Automatically selected the socket_vmnet network
	I0914 10:31:48.202082    5082 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0914 10:31:48.202098    5082 cni.go:84] Creating CNI manager for ""
	I0914 10:31:48.202119    5082 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0914 10:31:48.202148    5082 start.go:340] cluster config:
	{Name:kubernetes-upgrade-804000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-804000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 10:31:48.205932    5082 iso.go:125] acquiring lock: {Name:mk9ce8cb266895455f3a8fa26b67755853c5c1f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 10:31:48.213027    5082 out.go:177] * Starting "kubernetes-upgrade-804000" primary control-plane node in "kubernetes-upgrade-804000" cluster
	I0914 10:31:48.216862    5082 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0914 10:31:48.216884    5082 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0914 10:31:48.216897    5082 cache.go:56] Caching tarball of preloaded images
	I0914 10:31:48.216969    5082 preload.go:172] Found /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 10:31:48.216982    5082 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0914 10:31:48.217047    5082 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/kubernetes-upgrade-804000/config.json ...
	I0914 10:31:48.217064    5082 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/kubernetes-upgrade-804000/config.json: {Name:mk90efe96030d78e0b329a1ed2bc2be48c54842c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 10:31:48.217418    5082 start.go:360] acquireMachinesLock for kubernetes-upgrade-804000: {Name:mk3755b4300f547653a71a0686083053e04fcd63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 10:31:48.217454    5082 start.go:364] duration metric: took 28.041µs to acquireMachinesLock for "kubernetes-upgrade-804000"
	I0914 10:31:48.217465    5082 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-804000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-804000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 10:31:48.217501    5082 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 10:31:48.225970    5082 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0914 10:31:48.243952    5082 start.go:159] libmachine.API.Create for "kubernetes-upgrade-804000" (driver="qemu2")
	I0914 10:31:48.243979    5082 client.go:168] LocalClient.Create starting
	I0914 10:31:48.244052    5082 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/ca.pem
	I0914 10:31:48.244090    5082 main.go:141] libmachine: Decoding PEM data...
	I0914 10:31:48.244098    5082 main.go:141] libmachine: Parsing certificate...
	I0914 10:31:48.244134    5082 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/cert.pem
	I0914 10:31:48.244163    5082 main.go:141] libmachine: Decoding PEM data...
	I0914 10:31:48.244170    5082 main.go:141] libmachine: Parsing certificate...
	I0914 10:31:48.244623    5082 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19643-1079/.minikube/cache/iso/arm64/minikube-v1.34.0-1726281733-19643-arm64.iso...
	I0914 10:31:48.564395    5082 main.go:141] libmachine: Creating SSH key...
	I0914 10:31:48.751431    5082 main.go:141] libmachine: Creating Disk image...
	I0914 10:31:48.751439    5082 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 10:31:48.751628    5082 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/kubernetes-upgrade-804000/disk.qcow2.raw /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/kubernetes-upgrade-804000/disk.qcow2
	I0914 10:31:48.760949    5082 main.go:141] libmachine: STDOUT: 
	I0914 10:31:48.760969    5082 main.go:141] libmachine: STDERR: 
	I0914 10:31:48.761035    5082 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/kubernetes-upgrade-804000/disk.qcow2 +20000M
	I0914 10:31:48.768999    5082 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 10:31:48.769014    5082 main.go:141] libmachine: STDERR: 
	I0914 10:31:48.769032    5082 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/kubernetes-upgrade-804000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/kubernetes-upgrade-804000/disk.qcow2
	I0914 10:31:48.769040    5082 main.go:141] libmachine: Starting QEMU VM...
	I0914 10:31:48.769052    5082 qemu.go:418] Using hvf for hardware acceleration
	I0914 10:31:48.769081    5082 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/kubernetes-upgrade-804000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/kubernetes-upgrade-804000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/kubernetes-upgrade-804000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:d3:ec:71:79:d0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/kubernetes-upgrade-804000/disk.qcow2
	I0914 10:31:48.770655    5082 main.go:141] libmachine: STDOUT: 
	I0914 10:31:48.770671    5082 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 10:31:48.770707    5082 client.go:171] duration metric: took 526.742791ms to LocalClient.Create
	I0914 10:31:50.772857    5082 start.go:128] duration metric: took 2.555430333s to createHost
	I0914 10:31:50.772936    5082 start.go:83] releasing machines lock for "kubernetes-upgrade-804000", held for 2.555579s
	W0914 10:31:50.773058    5082 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 10:31:50.786311    5082 out.go:177] * Deleting "kubernetes-upgrade-804000" in qemu2 ...
	W0914 10:31:50.817562    5082 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 10:31:50.817589    5082 start.go:729] Will try again in 5 seconds ...
	I0914 10:31:55.819618    5082 start.go:360] acquireMachinesLock for kubernetes-upgrade-804000: {Name:mk3755b4300f547653a71a0686083053e04fcd63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 10:31:55.820134    5082 start.go:364] duration metric: took 413.791µs to acquireMachinesLock for "kubernetes-upgrade-804000"
	I0914 10:31:55.820199    5082 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-804000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-804000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 10:31:55.820386    5082 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 10:31:55.826149    5082 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0914 10:31:55.872936    5082 start.go:159] libmachine.API.Create for "kubernetes-upgrade-804000" (driver="qemu2")
	I0914 10:31:55.872981    5082 client.go:168] LocalClient.Create starting
	I0914 10:31:55.873107    5082 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/ca.pem
	I0914 10:31:55.873192    5082 main.go:141] libmachine: Decoding PEM data...
	I0914 10:31:55.873208    5082 main.go:141] libmachine: Parsing certificate...
	I0914 10:31:55.873270    5082 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/cert.pem
	I0914 10:31:55.873318    5082 main.go:141] libmachine: Decoding PEM data...
	I0914 10:31:55.873333    5082 main.go:141] libmachine: Parsing certificate...
	I0914 10:31:55.873912    5082 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19643-1079/.minikube/cache/iso/arm64/minikube-v1.34.0-1726281733-19643-arm64.iso...
	I0914 10:31:56.044526    5082 main.go:141] libmachine: Creating SSH key...
	I0914 10:31:56.256852    5082 main.go:141] libmachine: Creating Disk image...
	I0914 10:31:56.256868    5082 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 10:31:56.257078    5082 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/kubernetes-upgrade-804000/disk.qcow2.raw /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/kubernetes-upgrade-804000/disk.qcow2
	I0914 10:31:56.266593    5082 main.go:141] libmachine: STDOUT: 
	I0914 10:31:56.266616    5082 main.go:141] libmachine: STDERR: 
	I0914 10:31:56.266688    5082 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/kubernetes-upgrade-804000/disk.qcow2 +20000M
	I0914 10:31:56.274623    5082 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 10:31:56.274638    5082 main.go:141] libmachine: STDERR: 
	I0914 10:31:56.274653    5082 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/kubernetes-upgrade-804000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/kubernetes-upgrade-804000/disk.qcow2
	I0914 10:31:56.274664    5082 main.go:141] libmachine: Starting QEMU VM...
	I0914 10:31:56.274672    5082 qemu.go:418] Using hvf for hardware acceleration
	I0914 10:31:56.274716    5082 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/kubernetes-upgrade-804000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/kubernetes-upgrade-804000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/kubernetes-upgrade-804000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:cf:92:7f:83:95 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/kubernetes-upgrade-804000/disk.qcow2
	I0914 10:31:56.276343    5082 main.go:141] libmachine: STDOUT: 
	I0914 10:31:56.276355    5082 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 10:31:56.276372    5082 client.go:171] duration metric: took 403.403084ms to LocalClient.Create
	I0914 10:31:58.278419    5082 start.go:128] duration metric: took 2.458103833s to createHost
	I0914 10:31:58.278456    5082 start.go:83] releasing machines lock for "kubernetes-upgrade-804000", held for 2.458404208s
	W0914 10:31:58.278635    5082 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-804000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-804000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 10:31:58.283037    5082 out.go:201] 
	W0914 10:31:58.296949    5082 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 10:31:58.296979    5082 out.go:270] * 
	* 
	W0914 10:31:58.297851    5082 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 10:31:58.308971    5082 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-804000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
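
Both create attempts above fail at the same step: socket_vmnet_client cannot reach the socket_vmnet daemon on /var/run/socket_vmnet ("Connection refused"), so the qemu2 VM is never launched and minikube exits with status 80. A throwaway Go probe (an illustration, not part of the test harness) that reproduces just that connection check:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// The qemu2 driver's socket_vmnet networking only works while a
		// daemon is serving this unix socket; dialing it distinguishes
		// "daemon not running" from other start failures.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet unreachable:", err) // the state shown above
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}
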
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-804000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-804000: (3.12845475s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-804000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-804000 status --format={{.Host}}: exit status 7 (55.370375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-804000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-804000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.190191375s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-804000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19643
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-804000" primary control-plane node in "kubernetes-upgrade-804000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-804000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-804000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 10:32:01.535103    5124 out.go:345] Setting OutFile to fd 1 ...
	I0914 10:32:01.535251    5124 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:32:01.535254    5124 out.go:358] Setting ErrFile to fd 2...
	I0914 10:32:01.535257    5124 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:32:01.535369    5124 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19643-1079/.minikube/bin
	I0914 10:32:01.536336    5124 out.go:352] Setting JSON to false
	I0914 10:32:01.552723    5124 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3684,"bootTime":1726331437,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0914 10:32:01.552784    5124 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0914 10:32:01.557854    5124 out.go:177] * [kubernetes-upgrade-804000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0914 10:32:01.567025    5124 out.go:177]   - MINIKUBE_LOCATION=19643
	I0914 10:32:01.567075    5124 notify.go:220] Checking for updates...
	I0914 10:32:01.573939    5124 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	I0914 10:32:01.577947    5124 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 10:32:01.580994    5124 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 10:32:01.583944    5124 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	I0914 10:32:01.587008    5124 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 10:32:01.590287    5124 config.go:182] Loaded profile config "kubernetes-upgrade-804000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0914 10:32:01.590546    5124 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 10:32:01.594935    5124 out.go:177] * Using the qemu2 driver based on existing profile
	I0914 10:32:01.601941    5124 start.go:297] selected driver: qemu2
	I0914 10:32:01.601948    5124 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-804000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-804000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 10:32:01.602000    5124 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 10:32:01.604381    5124 cni.go:84] Creating CNI manager for ""
	I0914 10:32:01.604426    5124 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 10:32:01.604460    5124 start.go:340] cluster config:
	{Name:kubernetes-upgrade-804000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-804000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 10:32:01.607968    5124 iso.go:125] acquiring lock: {Name:mk9ce8cb266895455f3a8fa26b67755853c5c1f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 10:32:01.613467    5124 out.go:177] * Starting "kubernetes-upgrade-804000" primary control-plane node in "kubernetes-upgrade-804000" cluster
	I0914 10:32:01.617939    5124 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0914 10:32:01.617960    5124 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0914 10:32:01.617974    5124 cache.go:56] Caching tarball of preloaded images
	I0914 10:32:01.618043    5124 preload.go:172] Found /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 10:32:01.618047    5124 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0914 10:32:01.618110    5124 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/kubernetes-upgrade-804000/config.json ...
	I0914 10:32:01.618486    5124 start.go:360] acquireMachinesLock for kubernetes-upgrade-804000: {Name:mk3755b4300f547653a71a0686083053e04fcd63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 10:32:01.618514    5124 start.go:364] duration metric: took 22.583µs to acquireMachinesLock for "kubernetes-upgrade-804000"
	I0914 10:32:01.618523    5124 start.go:96] Skipping create...Using existing machine configuration
	I0914 10:32:01.618529    5124 fix.go:54] fixHost starting: 
	I0914 10:32:01.618644    5124 fix.go:112] recreateIfNeeded on kubernetes-upgrade-804000: state=Stopped err=<nil>
	W0914 10:32:01.618652    5124 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 10:32:01.622810    5124 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-804000" ...
	I0914 10:32:01.630916    5124 qemu.go:418] Using hvf for hardware acceleration
	I0914 10:32:01.630953    5124 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/kubernetes-upgrade-804000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/kubernetes-upgrade-804000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/kubernetes-upgrade-804000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:cf:92:7f:83:95 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/kubernetes-upgrade-804000/disk.qcow2
	I0914 10:32:01.632920    5124 main.go:141] libmachine: STDOUT: 
	I0914 10:32:01.632938    5124 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 10:32:01.632970    5124 fix.go:56] duration metric: took 14.441375ms for fixHost
	I0914 10:32:01.632974    5124 start.go:83] releasing machines lock for "kubernetes-upgrade-804000", held for 14.456ms
	W0914 10:32:01.632979    5124 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 10:32:01.633020    5124 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 10:32:01.633025    5124 start.go:729] Will try again in 5 seconds ...
	I0914 10:32:06.635069    5124 start.go:360] acquireMachinesLock for kubernetes-upgrade-804000: {Name:mk3755b4300f547653a71a0686083053e04fcd63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 10:32:06.635617    5124 start.go:364] duration metric: took 425.875µs to acquireMachinesLock for "kubernetes-upgrade-804000"
	I0914 10:32:06.635771    5124 start.go:96] Skipping create...Using existing machine configuration
	I0914 10:32:06.635791    5124 fix.go:54] fixHost starting: 
	I0914 10:32:06.636587    5124 fix.go:112] recreateIfNeeded on kubernetes-upgrade-804000: state=Stopped err=<nil>
	W0914 10:32:06.636615    5124 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 10:32:06.642291    5124 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-804000" ...
	I0914 10:32:06.649192    5124 qemu.go:418] Using hvf for hardware acceleration
	I0914 10:32:06.649427    5124 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/kubernetes-upgrade-804000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/kubernetes-upgrade-804000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/kubernetes-upgrade-804000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:cf:92:7f:83:95 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/kubernetes-upgrade-804000/disk.qcow2
	I0914 10:32:06.659163    5124 main.go:141] libmachine: STDOUT: 
	I0914 10:32:06.659221    5124 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 10:32:06.659356    5124 fix.go:56] duration metric: took 23.56475ms for fixHost
	I0914 10:32:06.659374    5124 start.go:83] releasing machines lock for "kubernetes-upgrade-804000", held for 23.733334ms
	W0914 10:32:06.659548    5124 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-804000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-804000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 10:32:06.668105    5124 out.go:201] 
	W0914 10:32:06.671251    5124 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 10:32:06.671280    5124 out.go:270] * 
	* 
	W0914 10:32:06.673692    5124 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 10:32:06.682176    5124 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-804000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-804000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-804000 version --output=json: exit status 1 (65.83225ms)

                                                
                                                
** stderr ** 
	error: context "kubernetes-upgrade-804000" does not exist

                                                
                                                
** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
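
kubectl fails here for a mundane reason: the kubernetes-upgrade-804000 cluster never came up, so no context for it was ever written to the kubeconfig. A small Go sketch (illustrative only; the pre-flight check and its placement are assumptions, not harness code) that lists the kubeconfig contexts before attempting a context-scoped kubectl call:

	package main

	import (
		"fmt"
		"os/exec"
		"slices"
		"strings"
	)

	func main() {
		// "kubectl config get-contexts -o name" prints one context name per line.
		out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
		if err != nil {
			fmt.Println("kubectl failed:", err)
			return
		}
		contexts := strings.Split(strings.TrimSpace(string(out)), "\n")
		if !slices.Contains(contexts, "kubernetes-upgrade-804000") {
			// The situation above: the profile never provisioned, so the
			// context is absent and any --context call exits non-zero.
			fmt.Println(`context "kubernetes-upgrade-804000" does not exist`)
		}
	}
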
panic.go:629: *** TestKubernetesUpgrade FAILED at 2024-09-14 10:32:06.762964 -0700 PDT m=+2953.067593126
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-804000 -n kubernetes-upgrade-804000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-804000 -n kubernetes-upgrade-804000: exit status 7 (33.297541ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-804000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-804000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-804000
--- FAIL: TestKubernetesUpgrade (18.78s)

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.4s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=19643
- KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2735762108/001
* Using the hyperkit driver based on user configuration

                                                
                                                
X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

                                                
                                                
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.40s)
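
The failure here is environmental rather than a regression: the hyperkit driver is x86-only, and on this darwin/arm64 agent minikube refuses it outright with DRV_UNSUPPORTED_OS (exit status 56). The v1.2.0-to-current subtest below fails identically. A hypothetical guard (an assumption, not the repo's actual code) that would skip these subtests on Apple Silicon instead of failing them:

	package upgrade_test

	import (
		"runtime"
		"testing"
	)

	// skipIfNoHyperkit skips tests that exercise the hyperkit driver on
	// platforms where minikube itself rejects it, as seen above.
	func skipIfNoHyperkit(t *testing.T) {
		t.Helper()
		if runtime.GOOS == "darwin" && runtime.GOARCH == "arm64" {
			t.Skip("hyperkit driver is not supported on darwin/arm64")
		}
	}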

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.25s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=19643
- KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2546689493/001
* Using the hyperkit driver based on user configuration

                                                
                                                
X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

                                                
                                                
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.25s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (575.66s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3761611045 start -p stopped-upgrade-130000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3761611045 start -p stopped-upgrade-130000 --memory=2200 --vm-driver=qemu2 : (40.915390667s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3761611045 -p stopped-upgrade-130000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3761611045 -p stopped-upgrade-130000 stop: (12.105503417s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-130000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0914 10:34:39.744301    1603 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/addons-528000/client.crt: no such file or directory" logger="UnhandledError"
E0914 10:36:36.643320    1603 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/addons-528000/client.crt: no such file or directory" logger="UnhandledError"
E0914 10:36:47.832428    1603 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/functional-855000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-130000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m42.554678625s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-130000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19643
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-130000" primary control-plane node in "stopped-upgrade-130000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-130000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 10:33:00.943276    5189 out.go:345] Setting OutFile to fd 1 ...
	I0914 10:33:00.943414    5189 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:33:00.943418    5189 out.go:358] Setting ErrFile to fd 2...
	I0914 10:33:00.943421    5189 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:33:00.943539    5189 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19643-1079/.minikube/bin
	I0914 10:33:00.944552    5189 out.go:352] Setting JSON to false
	I0914 10:33:00.961802    5189 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3743,"bootTime":1726331437,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0914 10:33:00.961876    5189 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0914 10:33:00.966731    5189 out.go:177] * [stopped-upgrade-130000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0914 10:33:00.974881    5189 out.go:177]   - MINIKUBE_LOCATION=19643
	I0914 10:33:00.974949    5189 notify.go:220] Checking for updates...
	I0914 10:33:00.981815    5189 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	I0914 10:33:00.984830    5189 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 10:33:00.987901    5189 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 10:33:00.990882    5189 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	I0914 10:33:00.993839    5189 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 10:33:00.997125    5189 config.go:182] Loaded profile config "stopped-upgrade-130000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0914 10:33:01.000756    5189 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0914 10:33:01.003889    5189 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 10:33:01.007832    5189 out.go:177] * Using the qemu2 driver based on existing profile
	I0914 10:33:01.014798    5189 start.go:297] selected driver: qemu2
	I0914 10:33:01.014804    5189 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-130000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50518 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-130000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0914 10:33:01.014853    5189 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 10:33:01.017465    5189 cni.go:84] Creating CNI manager for ""
	I0914 10:33:01.017500    5189 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
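
The cni.go lines above show the selection rule at work: with the docker container runtime on Kubernetes v1.24 or newer, where the in-tree dockershim is gone and the runtime is reached through cri-dockerd, minikube falls back to recommending its own bridge CNI. A minimal sketch of that decision with a hypothetical chooseCNI helper (stdlib-only version parsing; not minikube's actual code):

	package main

	import (
		"fmt"
		"strconv"
		"strings"
	)

	// chooseCNI reduces the decision logged by cni.go:158 to its essentials:
	// the docker runtime on Kubernetes v1.24+ gets the bridge CNI recommended.
	func chooseCNI(runtime, k8sVersion string) string {
		parts := strings.SplitN(strings.TrimPrefix(k8sVersion, "v"), ".", 3)
		if len(parts) < 2 {
			return ""
		}
		major, _ := strconv.Atoi(parts[0])
		minor, _ := strconv.Atoi(parts[1])
		if runtime == "docker" && (major > 1 || (major == 1 && minor >= 24)) {
			return "bridge"
		}
		return ""
	}

	func main() {
		fmt.Println(chooseCNI("docker", "v1.24.1")) // prints "bridge"
	}
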
	I0914 10:33:01.017530    5189 start.go:340] cluster config:
	{Name:stopped-upgrade-130000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50518 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-130000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0914 10:33:01.017586    5189 iso.go:125] acquiring lock: {Name:mk9ce8cb266895455f3a8fa26b67755853c5c1f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 10:33:01.024844    5189 out.go:177] * Starting "stopped-upgrade-130000" primary control-plane node in "stopped-upgrade-130000" cluster
	I0914 10:33:01.028648    5189 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0914 10:33:01.028664    5189 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0914 10:33:01.028671    5189 cache.go:56] Caching tarball of preloaded images
	I0914 10:33:01.028722    5189 preload.go:172] Found /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 10:33:01.028727    5189 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0914 10:33:01.028788    5189 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/stopped-upgrade-130000/config.json ...
	I0914 10:33:01.029295    5189 start.go:360] acquireMachinesLock for stopped-upgrade-130000: {Name:mk3755b4300f547653a71a0686083053e04fcd63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 10:33:01.029327    5189 start.go:364] duration metric: took 26.584µs to acquireMachinesLock for "stopped-upgrade-130000"
	I0914 10:33:01.029335    5189 start.go:96] Skipping create...Using existing machine configuration
	I0914 10:33:01.029341    5189 fix.go:54] fixHost starting: 
	I0914 10:33:01.029449    5189 fix.go:112] recreateIfNeeded on stopped-upgrade-130000: state=Stopped err=<nil>
	W0914 10:33:01.029457    5189 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 10:33:01.037768    5189 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-130000" ...
	I0914 10:33:01.041745    5189 qemu.go:418] Using hvf for hardware acceleration
	I0914 10:33:01.041818    5189 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/stopped-upgrade-130000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/stopped-upgrade-130000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/stopped-upgrade-130000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50483-:22,hostfwd=tcp::50484-:2376,hostname=stopped-upgrade-130000 -daemonize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/stopped-upgrade-130000/disk.qcow2
	I0914 10:33:01.089823    5189 main.go:141] libmachine: STDOUT: 
	I0914 10:33:01.089848    5189 main.go:141] libmachine: STDERR: 
	I0914 10:33:01.089854    5189 main.go:141] libmachine: Waiting for VM to start (ssh -p 50483 docker@127.0.0.1)...
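
The qemu-system-aarch64 invocation above uses user-mode networking with two hostfwd rules: host port 50483 forwards to the guest's SSH port 22 and 50484 to the Docker daemon port 2376. "Waiting for VM to start" then amounts to polling the forwarded SSH port until the guest answers. A sketch of such a wait loop, assuming only that the port accepts TCP once sshd is up (the real code also completes the SSH handshake shown in the log line):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// waitForSSH polls a forwarded SSH port until the guest accepts TCP
	// connections or the deadline passes.
	func waitForSSH(addr string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
			if err == nil {
				conn.Close()
				return nil
			}
			time.Sleep(time.Second)
		}
		return fmt.Errorf("timed out waiting for %s", addr)
	}

	func main() {
		if err := waitForSSH("127.0.0.1:50483", 5*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
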
	I0914 10:33:21.130429    5189 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/stopped-upgrade-130000/config.json ...
	I0914 10:33:21.131019    5189 machine.go:93] provisionDockerMachine start ...
	I0914 10:33:21.131178    5189 main.go:141] libmachine: Using SSH client type: native
	I0914 10:33:21.131558    5189 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104591190] 0x1045939d0 <nil>  [] 0s} localhost 50483 <nil> <nil>}
	I0914 10:33:21.131571    5189 main.go:141] libmachine: About to run SSH command:
	hostname
	I0914 10:33:21.221193    5189 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0914 10:33:21.221222    5189 buildroot.go:166] provisioning hostname "stopped-upgrade-130000"
	I0914 10:33:21.221329    5189 main.go:141] libmachine: Using SSH client type: native
	I0914 10:33:21.221569    5189 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104591190] 0x1045939d0 <nil>  [] 0s} localhost 50483 <nil> <nil>}
	I0914 10:33:21.221582    5189 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-130000 && echo "stopped-upgrade-130000" | sudo tee /etc/hostname
	I0914 10:33:21.306522    5189 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-130000
	
	I0914 10:33:21.306603    5189 main.go:141] libmachine: Using SSH client type: native
	I0914 10:33:21.306762    5189 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104591190] 0x1045939d0 <nil>  [] 0s} localhost 50483 <nil> <nil>}
	I0914 10:33:21.306776    5189 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-130000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-130000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-130000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 10:33:21.377499    5189 main.go:141] libmachine: SSH cmd err, output: <nil>: 
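
The script above keeps /etc/hosts consistent with the just-set hostname: if no entry already ends in the machine name, it either rewrites an existing 127.0.1.1 line in place with sed or appends a new one with tee -a. A sketch of rendering that script from Go with a hypothetical etcHostsCmd helper (minikube builds it from its own templates):

	package main

	import "fmt"

	// etcHostsCmd renders the remote script seen in the log: map 127.0.1.1
	// to the machine name, editing an existing entry in place or appending one.
	func etcHostsCmd(hostname string) string {
		return fmt.Sprintf(`
	if ! grep -xq '.*\s%[1]s' /etc/hosts; then
		if grep -xq '127.0.1.1\s.*' /etc/hosts; then
			sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
		else
			echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
		fi
	fi`, hostname)
	}

	func main() {
		fmt.Println(etcHostsCmd("stopped-upgrade-130000"))
	}
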
	I0914 10:33:21.377510    5189 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19643-1079/.minikube CaCertPath:/Users/jenkins/minikube-integration/19643-1079/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19643-1079/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19643-1079/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19643-1079/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19643-1079/.minikube}
	I0914 10:33:21.377526    5189 buildroot.go:174] setting up certificates
	I0914 10:33:21.377534    5189 provision.go:84] configureAuth start
	I0914 10:33:21.377541    5189 provision.go:143] copyHostCerts
	I0914 10:33:21.377612    5189 exec_runner.go:144] found /Users/jenkins/minikube-integration/19643-1079/.minikube/key.pem, removing ...
	I0914 10:33:21.377623    5189 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19643-1079/.minikube/key.pem
	I0914 10:33:21.377744    5189 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19643-1079/.minikube/key.pem (1675 bytes)
	I0914 10:33:21.377928    5189 exec_runner.go:144] found /Users/jenkins/minikube-integration/19643-1079/.minikube/ca.pem, removing ...
	I0914 10:33:21.377931    5189 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19643-1079/.minikube/ca.pem
	I0914 10:33:21.377989    5189 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19643-1079/.minikube/ca.pem (1078 bytes)
	I0914 10:33:21.378101    5189 exec_runner.go:144] found /Users/jenkins/minikube-integration/19643-1079/.minikube/cert.pem, removing ...
	I0914 10:33:21.378104    5189 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19643-1079/.minikube/cert.pem
	I0914 10:33:21.378157    5189 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19643-1079/.minikube/cert.pem (1123 bytes)
	I0914 10:33:21.378245    5189 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19643-1079/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19643-1079/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-130000 san=[127.0.0.1 localhost minikube stopped-upgrade-130000]
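
configureAuth then regenerates the Docker server certificate, signed by the minikube CA and carrying the SANs listed above (127.0.0.1, localhost, minikube, and the machine name). A self-contained sketch of issuing a certificate with those SANs via crypto/x509; it self-signs for brevity, whereas the real flow signs with ca.pem/ca-key.pem:

	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	// Generates a self-signed TLS server certificate with the same SANs as
	// the log's server.pem. The real flow signs with the minikube CA instead.
	func main() {
		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.stopped-upgrade-130000"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config above
			DNSNames:     []string{"localhost", "minikube", "stopped-upgrade-130000"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1")},
			KeyUsage:     x509.KeyUsageDigitalSignature,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
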
	I0914 10:33:21.439185    5189 provision.go:177] copyRemoteCerts
	I0914 10:33:21.439228    5189 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 10:33:21.439237    5189 sshutil.go:53] new ssh client: &{IP:localhost Port:50483 SSHKeyPath:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/stopped-upgrade-130000/id_rsa Username:docker}
	I0914 10:33:21.476387    5189 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 10:33:21.483123    5189 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0914 10:33:21.490003    5189 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0914 10:33:21.497596    5189 provision.go:87] duration metric: took 120.053458ms to configureAuth
	I0914 10:33:21.497610    5189 buildroot.go:189] setting minikube options for container-runtime
	I0914 10:33:21.497736    5189 config.go:182] Loaded profile config "stopped-upgrade-130000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0914 10:33:21.497772    5189 main.go:141] libmachine: Using SSH client type: native
	I0914 10:33:21.497861    5189 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104591190] 0x1045939d0 <nil>  [] 0s} localhost 50483 <nil> <nil>}
	I0914 10:33:21.497868    5189 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0914 10:33:21.564587    5189 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0914 10:33:21.564601    5189 buildroot.go:70] root file system type: tmpfs
	I0914 10:33:21.564651    5189 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0914 10:33:21.564719    5189 main.go:141] libmachine: Using SSH client type: native
	I0914 10:33:21.564833    5189 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104591190] 0x1045939d0 <nil>  [] 0s} localhost 50483 <nil> <nil>}
	I0914 10:33:21.564867    5189 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0914 10:33:21.635165    5189 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0914 10:33:21.635222    5189 main.go:141] libmachine: Using SSH client type: native
	I0914 10:33:21.635330    5189 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104591190] 0x1045939d0 <nil>  [] 0s} localhost 50483 <nil> <nil>}
	I0914 10:33:21.635340    5189 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0914 10:33:22.017309    5189 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0914 10:33:22.017322    5189 machine.go:96] duration metric: took 886.330125ms to provisionDockerMachine
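
Note the update idiom in the unit installation above: the rendered unit goes to docker.service.new, is diffed against the live file, and only when they differ (or, as here, when the old file does not exist, so diff itself fails) is it moved into place, followed by daemon-reload, enable, and restart. A sketch of the same write-only-if-changed idea with a hypothetical writeIfChanged helper:

	package main

	import (
		"bytes"
		"fmt"
		"os"
	)

	// writeIfChanged mirrors the diff-then-mv idiom in the log: only replace
	// the target (and report that a restart is needed) when the rendered
	// content differs from what is already on disk.
	func writeIfChanged(path string, content []byte) (changed bool, err error) {
		old, err := os.ReadFile(path)
		if err == nil && bytes.Equal(old, content) {
			return false, nil
		}
		if err != nil && !os.IsNotExist(err) {
			return false, err
		}
		return true, os.WriteFile(path, content, 0o644)
	}

	func main() {
		changed, err := writeIfChanged("/tmp/docker.service", []byte("[Unit]\n"))
		if err != nil {
			panic(err)
		}
		fmt.Println("restart needed:", changed)
	}
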
	I0914 10:33:22.017329    5189 start.go:293] postStartSetup for "stopped-upgrade-130000" (driver="qemu2")
	I0914 10:33:22.017336    5189 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 10:33:22.017408    5189 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 10:33:22.017417    5189 sshutil.go:53] new ssh client: &{IP:localhost Port:50483 SSHKeyPath:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/stopped-upgrade-130000/id_rsa Username:docker}
	I0914 10:33:22.056805    5189 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 10:33:22.058216    5189 info.go:137] Remote host: Buildroot 2021.02.12
	I0914 10:33:22.058223    5189 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19643-1079/.minikube/addons for local assets ...
	I0914 10:33:22.058518    5189 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19643-1079/.minikube/files for local assets ...
	I0914 10:33:22.058668    5189 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19643-1079/.minikube/files/etc/ssl/certs/16032.pem -> 16032.pem in /etc/ssl/certs
	I0914 10:33:22.058799    5189 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 10:33:22.061432    5189 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19643-1079/.minikube/files/etc/ssl/certs/16032.pem --> /etc/ssl/certs/16032.pem (1708 bytes)
	I0914 10:33:22.068074    5189 start.go:296] duration metric: took 50.742209ms for postStartSetup
	I0914 10:33:22.068088    5189 fix.go:56] duration metric: took 21.039633666s for fixHost
	I0914 10:33:22.068129    5189 main.go:141] libmachine: Using SSH client type: native
	I0914 10:33:22.068228    5189 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104591190] 0x1045939d0 <nil>  [] 0s} localhost 50483 <nil> <nil>}
	I0914 10:33:22.068232    5189 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 10:33:22.135582    5189 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726335202.269895962
	
	I0914 10:33:22.135591    5189 fix.go:216] guest clock: 1726335202.269895962
	I0914 10:33:22.135598    5189 fix.go:229] Guest: 2024-09-14 10:33:22.269895962 -0700 PDT Remote: 2024-09-14 10:33:22.06809 -0700 PDT m=+21.149973417 (delta=201.805962ms)
	I0914 10:33:22.135610    5189 fix.go:200] guest clock delta is within tolerance: 201.805962ms
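
fix.go reads the guest's `date +%s.%N` and compares it with the host clock, forcing a time sync only when the delta exceeds a tolerance; here the guest runs about 202ms ahead and passes. A sketch of that check (the 2s tolerance is an assumption for illustration, not necessarily minikube's constant):

	package main

	import (
		"fmt"
		"time"
	)

	// clockDeltaOK reports whether guest and host clocks are within tolerance,
	// as in the "guest clock delta is within tolerance" log line.
	func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta, delta <= tolerance
	}

	func main() {
		host := time.Now()
		guest := host.Add(201805962 * time.Nanosecond) // the delta from the log
		d, ok := clockDeltaOK(guest, host, 2*time.Second)
		fmt.Println(d, ok)
	}
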
	I0914 10:33:22.135613    5189 start.go:83] releasing machines lock for "stopped-upgrade-130000", held for 21.107168583s
	I0914 10:33:22.135682    5189 ssh_runner.go:195] Run: cat /version.json
	I0914 10:33:22.135695    5189 sshutil.go:53] new ssh client: &{IP:localhost Port:50483 SSHKeyPath:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/stopped-upgrade-130000/id_rsa Username:docker}
	I0914 10:33:22.135682    5189 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 10:33:22.135746    5189 sshutil.go:53] new ssh client: &{IP:localhost Port:50483 SSHKeyPath:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/stopped-upgrade-130000/id_rsa Username:docker}
	W0914 10:33:22.136395    5189 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50483: connect: connection refused
	I0914 10:33:22.136454    5189 retry.go:31] will retry after 350.599657ms: dial tcp [::1]:50483: connect: connection refused
	W0914 10:33:22.171051    5189 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0914 10:33:22.171113    5189 ssh_runner.go:195] Run: systemctl --version
	I0914 10:33:22.172889    5189 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 10:33:22.174575    5189 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 10:33:22.174599    5189 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0914 10:33:22.177514    5189 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0914 10:33:22.182497    5189 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
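
The two find/sed pipelines above normalize any pre-existing bridge and podman CNI configs: IPv6 dst/subnet entries are dropped, and every "subnet" (plus the podman "gateway") is forced into the expected pod network 10.244.0.0/16. A rough Go equivalent of the subnet rewrite on one conflist, regexp-based like the sed rather than a JSON-aware edit:

	package main

	import (
		"fmt"
		"regexp"
	)

	// forcePodCIDR rewrites "subnet" values in a CNI conflist to the cluster's
	// pod CIDR, approximating the sed commands in the log (which additionally
	// delete IPv6 dst/subnet entries).
	func forcePodCIDR(conf, cidr string) string {
		re := regexp.MustCompile(`"subnet":\s*"[^"]*"`)
		return re.ReplaceAllString(conf, fmt.Sprintf(`"subnet": %q`, cidr))
	}

	func main() {
		in := `{"ranges": [[{"subnet": "10.88.0.0/16"}]]}`
		fmt.Println(forcePodCIDR(in, "10.244.0.0/16"))
	}
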
	I0914 10:33:22.182505    5189 start.go:495] detecting cgroup driver to use...
	I0914 10:33:22.182583    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 10:33:22.189151    5189 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0914 10:33:22.192667    5189 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0914 10:33:22.195741    5189 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0914 10:33:22.195768    5189 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0914 10:33:22.198488    5189 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0914 10:33:22.201713    5189 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0914 10:33:22.204922    5189 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0914 10:33:22.208176    5189 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 10:33:22.211076    5189 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0914 10:33:22.213898    5189 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0914 10:33:22.217285    5189 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0914 10:33:22.220873    5189 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 10:33:22.223910    5189 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 10:33:22.226445    5189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 10:33:22.313299    5189 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0914 10:33:22.319493    5189 start.go:495] detecting cgroup driver to use...
	I0914 10:33:22.319566    5189 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0914 10:33:22.324922    5189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 10:33:22.330464    5189 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 10:33:22.340864    5189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 10:33:22.345257    5189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0914 10:33:22.349989    5189 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0914 10:33:22.397495    5189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0914 10:33:22.403023    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 10:33:22.408204    5189 ssh_runner.go:195] Run: which cri-dockerd
	I0914 10:33:22.409371    5189 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0914 10:33:22.412260    5189 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0914 10:33:22.417183    5189 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0914 10:33:22.495559    5189 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0914 10:33:22.575028    5189 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0914 10:33:22.575099    5189 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0914 10:33:22.580240    5189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 10:33:22.655581    5189 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0914 10:33:23.811822    5189 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.156273542s)
	I0914 10:33:23.811891    5189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0914 10:33:23.816337    5189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0914 10:33:23.820705    5189 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0914 10:33:23.894215    5189 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0914 10:33:23.975072    5189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 10:33:24.034352    5189 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0914 10:33:24.040270    5189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0914 10:33:24.044760    5189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 10:33:24.123492    5189 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0914 10:33:24.163206    5189 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0914 10:33:24.163314    5189 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0914 10:33:24.165844    5189 start.go:563] Will wait 60s for crictl version
	I0914 10:33:24.165889    5189 ssh_runner.go:195] Run: which crictl
	I0914 10:33:24.167244    5189 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 10:33:24.181349    5189 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0914 10:33:24.181446    5189 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0914 10:33:24.196989    5189 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0914 10:33:24.217294    5189 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0914 10:33:24.217381    5189 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0914 10:33:24.218625    5189 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 10:33:24.222075    5189 kubeadm.go:883] updating cluster {Name:stopped-upgrade-130000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50518 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-130000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0914 10:33:24.222122    5189 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0914 10:33:24.222171    5189 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0914 10:33:24.237150    5189 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0914 10:33:24.237158    5189 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0914 10:33:24.237215    5189 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0914 10:33:24.240619    5189 ssh_runner.go:195] Run: which lz4
	I0914 10:33:24.241892    5189 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0914 10:33:24.243160    5189 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 10:33:24.243170    5189 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0914 10:33:25.169792    5189 docker.go:649] duration metric: took 927.994834ms to copy over tarball
	I0914 10:33:25.169857    5189 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 10:33:26.324858    5189 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.155035959s)
	I0914 10:33:26.324873    5189 ssh_runner.go:146] rm: /preloaded.tar.lz4
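
The preload steps above read in order: a stat of /preloaded.tar.lz4 on the guest fails, the ~360 MB cached tarball is copied over, unpacked into /var with tar -I lz4, and the tarball is removed. On the host side, the earlier "Found local preload ... skipping download" is essentially an os.Stat on the cache path; a sketch (the path below is shortened for illustration):

	package main

	import (
		"fmt"
		"os"
	)

	// havePreload mirrors the host-side check behind "Found local preload ...
	// in cache, skipping download": the tarball only needs downloading when
	// it is not already on disk.
	func havePreload(path string) bool {
		_, err := os.Stat(path)
		return err == nil
	}

	func main() {
		p := "/Users/jenkins/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4" // illustrative path
		fmt.Println("preload cached:", havePreload(p))
	}
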
	I0914 10:33:26.341045    5189 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0914 10:33:26.344609    5189 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0914 10:33:26.349726    5189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 10:33:26.427489    5189 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0914 10:33:28.382311    5189 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.954878959s)
	I0914 10:33:28.382507    5189 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0914 10:33:28.395494    5189 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0914 10:33:28.395502    5189 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0914 10:33:28.395509    5189 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0914 10:33:28.407109    5189 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 10:33:28.407887    5189 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0914 10:33:28.409043    5189 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 10:33:28.409207    5189 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0914 10:33:28.410238    5189 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0914 10:33:28.410484    5189 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0914 10:33:28.411378    5189 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0914 10:33:28.412597    5189 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0914 10:33:28.412643    5189 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0914 10:33:28.412690    5189 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0914 10:33:28.413696    5189 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0914 10:33:28.414054    5189 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0914 10:33:28.415463    5189 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0914 10:33:28.415500    5189 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0914 10:33:28.416432    5189 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0914 10:33:28.417157    5189 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0914 10:33:28.832130    5189 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0914 10:33:28.843579    5189 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0914 10:33:28.843604    5189 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0914 10:33:28.843671    5189 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0914 10:33:28.844017    5189 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0914 10:33:28.856689    5189 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0914 10:33:28.861527    5189 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0914 10:33:28.862398    5189 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0914 10:33:28.862415    5189 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0914 10:33:28.862451    5189 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0914 10:33:28.862540    5189 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0914 10:33:28.865072    5189 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0914 10:33:28.869951    5189 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0914 10:33:28.869972    5189 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0914 10:33:28.870037    5189 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0914 10:33:28.884139    5189 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0914 10:33:28.884158    5189 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0914 10:33:28.884235    5189 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0914 10:33:28.886532    5189 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0914 10:33:28.895283    5189 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0914 10:33:28.895407    5189 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0914 10:33:28.895423    5189 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0914 10:33:28.895479    5189 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0914 10:33:28.905314    5189 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0914 10:33:28.910842    5189 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0914 10:33:28.910954    5189 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0914 10:33:28.912646    5189 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0914 10:33:28.912658    5189 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0914 10:33:28.920545    5189 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0914 10:33:28.920554    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	W0914 10:33:28.923333    5189 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0914 10:33:28.923487    5189 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0914 10:33:28.936160    5189 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0914 10:33:28.961876    5189 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0914 10:33:28.961924    5189 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0914 10:33:28.961941    5189 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0914 10:33:28.961971    5189 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0914 10:33:28.961981    5189 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0914 10:33:28.962005    5189 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0914 10:33:28.962016    5189 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0914 10:33:28.971829    5189 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0914 10:33:28.973083    5189 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0914 10:33:28.973208    5189 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0914 10:33:28.974818    5189 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0914 10:33:28.974828    5189 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0914 10:33:29.016322    5189 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0914 10:33:29.016334    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0914 10:33:29.052491    5189 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	W0914 10:33:29.248435    5189 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0914 10:33:29.248667    5189 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 10:33:29.265962    5189 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0914 10:33:29.265990    5189 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 10:33:29.266086    5189 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 10:33:29.284069    5189 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0914 10:33:29.284415    5189 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0914 10:33:29.285936    5189 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0914 10:33:29.285954    5189 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0914 10:33:29.317744    5189 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0914 10:33:29.317757    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0914 10:33:29.554246    5189 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0914 10:33:29.554281    5189 cache_images.go:92] duration metric: took 1.158814875s to LoadCachedImages
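
Each image that fails the cache check is removed from the runtime, shipped as a tarball into /var/lib/minikube/images, and loaded with `sudo cat <tar> | docker load`; the arch-mismatch warnings mark cached amd64 manifests that were re-fetched for arm64 before transfer. A local sketch of the load step with os/exec (the remote version runs the same pipeline over SSH):

	package main

	import (
		"os"
		"os/exec"
	)

	// loadImage streams an image tarball into the Docker daemon, the local
	// equivalent of the log's `sudo cat <tar> | docker load` run over SSH.
	func loadImage(tarPath string) error {
		f, err := os.Open(tarPath)
		if err != nil {
			return err
		}
		defer f.Close()

		cmd := exec.Command("docker", "load")
		cmd.Stdin = f
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		return cmd.Run()
	}

	func main() {
		if err := loadImage("/var/lib/minikube/images/pause_3.7"); err != nil {
			panic(err)
		}
	}
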
	W0914 10:33:29.554318    5189 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	I0914 10:33:29.554328    5189 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0914 10:33:29.554379    5189 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-130000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-130000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 10:33:29.554462    5189 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0914 10:33:29.567967    5189 cni.go:84] Creating CNI manager for ""
	I0914 10:33:29.567977    5189 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 10:33:29.567982    5189 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 10:33:29.567990    5189 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-130000 NodeName:stopped-upgrade-130000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 10:33:29.568052    5189 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-130000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
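The generated kubeadm.yaml above is four YAML documents in one file, separated by ---: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A consumer has to decode them in a loop rather than as one object; a sketch using gopkg.in/yaml.v3 that prints each document's kind (the file name is illustrative):

	package main

	import (
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	// Reads a multi-document kubeadm.yaml like the one above and prints each
	// document's kind. yaml.Decoder returns io.EOF after the last document.
	func main() {
		f, err := os.Open("kubeadm.yaml") // illustrative path
		if err != nil {
			panic(err)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f)
		for {
			var doc struct {
				APIVersion string `yaml:"apiVersion"`
				Kind       string `yaml:"kind"`
			}
			if err := dec.Decode(&doc); err == io.EOF {
				break
			} else if err != nil {
				panic(err)
			}
			fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
		}
	}
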
	I0914 10:33:29.568104    5189 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0914 10:33:29.571280    5189 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 10:33:29.571312    5189 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 10:33:29.573934    5189 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0914 10:33:29.578954    5189 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 10:33:29.583670    5189 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0914 10:33:29.589248    5189 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0914 10:33:29.590469    5189 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 10:33:29.593772    5189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 10:33:29.654335    5189 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 10:33:29.664452    5189 certs.go:68] Setting up /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/stopped-upgrade-130000 for IP: 10.0.2.15
	I0914 10:33:29.664461    5189 certs.go:194] generating shared ca certs ...
	I0914 10:33:29.664470    5189 certs.go:226] acquiring lock for ca certs: {Name:mk7a785a7c5445527aceab92dcaa64cad76e8086 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 10:33:29.664627    5189 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19643-1079/.minikube/ca.key
	I0914 10:33:29.664679    5189 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19643-1079/.minikube/proxy-client-ca.key
	I0914 10:33:29.664686    5189 certs.go:256] generating profile certs ...
	I0914 10:33:29.664765    5189 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/stopped-upgrade-130000/client.key
	I0914 10:33:29.664783    5189 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/stopped-upgrade-130000/apiserver.key.74bfbd6c
	I0914 10:33:29.664792    5189 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/stopped-upgrade-130000/apiserver.crt.74bfbd6c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0914 10:33:29.849503    5189 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/stopped-upgrade-130000/apiserver.crt.74bfbd6c ...
	I0914 10:33:29.849527    5189 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/stopped-upgrade-130000/apiserver.crt.74bfbd6c: {Name:mkf3e51e13810059867d19fbec340487cd9b4a7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 10:33:29.851226    5189 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/stopped-upgrade-130000/apiserver.key.74bfbd6c ...
	I0914 10:33:29.851238    5189 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/stopped-upgrade-130000/apiserver.key.74bfbd6c: {Name:mke6a4e61bc20a372cdee59dad6d1444a3dde507 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 10:33:29.851386    5189 certs.go:381] copying /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/stopped-upgrade-130000/apiserver.crt.74bfbd6c -> /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/stopped-upgrade-130000/apiserver.crt
	I0914 10:33:29.851533    5189 certs.go:385] copying /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/stopped-upgrade-130000/apiserver.key.74bfbd6c -> /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/stopped-upgrade-130000/apiserver.key
	I0914 10:33:29.851696    5189 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/stopped-upgrade-130000/proxy-client.key
	I0914 10:33:29.851836    5189 certs.go:484] found cert: /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/1603.pem (1338 bytes)
	W0914 10:33:29.851867    5189 certs.go:480] ignoring /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/1603_empty.pem, impossibly tiny 0 bytes
	I0914 10:33:29.851874    5189 certs.go:484] found cert: /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/ca-key.pem (1675 bytes)
	I0914 10:33:29.851894    5189 certs.go:484] found cert: /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/ca.pem (1078 bytes)
	I0914 10:33:29.851912    5189 certs.go:484] found cert: /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/cert.pem (1123 bytes)
	I0914 10:33:29.851930    5189 certs.go:484] found cert: /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/key.pem (1675 bytes)
	I0914 10:33:29.852260    5189 certs.go:484] found cert: /Users/jenkins/minikube-integration/19643-1079/.minikube/files/etc/ssl/certs/16032.pem (1708 bytes)
	I0914 10:33:29.852635    5189 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19643-1079/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 10:33:29.860065    5189 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19643-1079/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 10:33:29.866618    5189 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19643-1079/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 10:33:29.873540    5189 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19643-1079/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 10:33:29.880925    5189 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/stopped-upgrade-130000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0914 10:33:29.888608    5189 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/stopped-upgrade-130000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 10:33:29.895778    5189 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/stopped-upgrade-130000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 10:33:29.902366    5189 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/stopped-upgrade-130000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0914 10:33:29.909241    5189 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19643-1079/.minikube/files/etc/ssl/certs/16032.pem --> /usr/share/ca-certificates/16032.pem (1708 bytes)
	I0914 10:33:29.916511    5189 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19643-1079/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 10:33:29.923420    5189 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/1603.pem --> /usr/share/ca-certificates/1603.pem (1338 bytes)
	I0914 10:33:29.930052    5189 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 10:33:29.935184    5189 ssh_runner.go:195] Run: openssl version
	I0914 10:33:29.937050    5189 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 10:33:29.940401    5189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 10:33:29.941805    5189 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 14 16:43 /usr/share/ca-certificates/minikubeCA.pem
	I0914 10:33:29.941829    5189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 10:33:29.943611    5189 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 10:33:29.946304    5189 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1603.pem && ln -fs /usr/share/ca-certificates/1603.pem /etc/ssl/certs/1603.pem"
	I0914 10:33:29.949402    5189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1603.pem
	I0914 10:33:29.950810    5189 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 14 16:59 /usr/share/ca-certificates/1603.pem
	I0914 10:33:29.950845    5189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1603.pem
	I0914 10:33:29.952557    5189 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1603.pem /etc/ssl/certs/51391683.0"
	I0914 10:33:29.955739    5189 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16032.pem && ln -fs /usr/share/ca-certificates/16032.pem /etc/ssl/certs/16032.pem"
	I0914 10:33:29.958507    5189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16032.pem
	I0914 10:33:29.959911    5189 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 14 16:59 /usr/share/ca-certificates/16032.pem
	I0914 10:33:29.959932    5189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16032.pem
	I0914 10:33:29.961740    5189 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16032.pem /etc/ssl/certs/3ec20f2e.0"
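The three hash-and-symlink pairs above follow OpenSSL's CA-directory convention: a certificate under /etc/ssl/certs is found by lookup-by-hash only if a symlink named after its subject hash (plus a ".0" suffix) points at it. A minimal sketch of one such pair, assuming the same minikubeCA.pem path from the log:

    # compute the subject hash openssl uses for CA-directory lookups
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    # install the cert and link it by hash, as the log's ln -fs commands do
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"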
	I0914 10:33:29.965205    5189 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 10:33:29.966752    5189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 10:33:29.968556    5189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 10:33:29.970402    5189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 10:33:29.972287    5189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 10:33:29.974195    5189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 10:33:29.975990    5189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
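Each of the six checks above relies on openssl's -checkend flag, which exits non-zero when the certificate expires within the given number of seconds (86400 s = 24 h), so a failing check is what would force regeneration. The same check on one of the log's cert paths, shown standalone:

    # exit 0 if the cert is still valid 24h from now; otherwise print a note
    openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
      || echo "cert expires within 24h and would need regeneration"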
	I0914 10:33:29.977716    5189 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-130000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50518 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-130000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0914 10:33:29.977789    5189 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0914 10:33:29.988055    5189 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 10:33:29.991121    5189 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0914 10:33:29.991129    5189 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0914 10:33:29.991157    5189 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 10:33:29.994594    5189 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 10:33:29.994901    5189 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-130000" does not appear in /Users/jenkins/minikube-integration/19643-1079/kubeconfig
	I0914 10:33:29.995026    5189 kubeconfig.go:62] /Users/jenkins/minikube-integration/19643-1079/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-130000" cluster setting kubeconfig missing "stopped-upgrade-130000" context setting]
	I0914 10:33:29.995222    5189 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19643-1079/kubeconfig: {Name:mk2bfa274931cfcaab81c340801bce4006cf7459 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 10:33:29.995731    5189 kapi.go:59] client config for stopped-upgrade-130000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/stopped-upgrade-130000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/stopped-upgrade-130000/client.key", CAFile:"/Users/jenkins/minikube-integration/19643-1079/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105b69800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0914 10:33:29.996066    5189 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 10:33:29.998836    5189 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-130000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
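The drift shown is the expected one for this upgrade path: Kubernetes 1.24 expects URL-style CRI socket endpoints (unix:///...) where older configs carried a bare path, and the new kubelet settings also swap the cgroup driver and add hairpin and runtime-timeout options. kubeadm.go:640 treats any non-empty diff output as drift; a minimal reproduction of that check, using the same file paths:

    # a non-zero exit from diff -u is what gets reported as "config drift"
    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new \
      || echo "drift detected; cluster will be reconfigured from the .new file"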
	I0914 10:33:29.998844    5189 kubeadm.go:1160] stopping kube-system containers ...
	I0914 10:33:29.998892    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0914 10:33:30.010033    5189 docker.go:483] Stopping containers: [bc0eb1fe6478 f2165e8cce8d ea8a24c9014a ccbe87febee7 bedcedf78c08 536e693fe537 5b995c5ba76a 8fe86898c11f]
	I0914 10:33:30.010116    5189 ssh_runner.go:195] Run: docker stop bc0eb1fe6478 f2165e8cce8d ea8a24c9014a ccbe87febee7 bedcedf78c08 536e693fe537 5b995c5ba76a 8fe86898c11f
	I0914 10:33:30.021947    5189 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 10:33:30.027838    5189 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 10:33:30.030639    5189 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 10:33:30.030644    5189 kubeadm.go:157] found existing configuration files:
	
	I0914 10:33:30.030670    5189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50518 /etc/kubernetes/admin.conf
	I0914 10:33:30.033276    5189 kubeadm.go:163] "https://control-plane.minikube.internal:50518" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50518 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 10:33:30.033304    5189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 10:33:30.036382    5189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50518 /etc/kubernetes/kubelet.conf
	I0914 10:33:30.038888    5189 kubeadm.go:163] "https://control-plane.minikube.internal:50518" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50518 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 10:33:30.038919    5189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 10:33:30.041478    5189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50518 /etc/kubernetes/controller-manager.conf
	I0914 10:33:30.044447    5189 kubeadm.go:163] "https://control-plane.minikube.internal:50518" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50518 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 10:33:30.044482    5189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 10:33:30.047236    5189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50518 /etc/kubernetes/scheduler.conf
	I0914 10:33:30.049734    5189 kubeadm.go:163] "https://control-plane.minikube.internal:50518" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50518 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 10:33:30.049757    5189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 10:33:30.052724    5189 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 10:33:30.055641    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 10:33:30.079674    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 10:33:30.547604    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 10:33:30.679675    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 10:33:30.702283    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
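Rather than running a full kubeadm init, the restart path replays individual init phases against the regenerated config. The five Run: lines above amount to the loop below (binary and config paths as in the log):

    # replay only the init phases needed to bring an existing control plane back up
    # $phase is intentionally unquoted so "certs all" splits into subcommand + argument
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
      sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" \
        kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
    done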
	I0914 10:33:30.730054    5189 api_server.go:52] waiting for apiserver process to appear ...
	I0914 10:33:30.730129    5189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 10:33:31.231837    5189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 10:33:31.732166    5189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 10:33:31.736805    5189 api_server.go:72] duration metric: took 1.006793584s to wait for apiserver process to appear ...
	I0914 10:33:31.736816    5189 api_server.go:88] waiting for apiserver healthz status ...
	I0914 10:33:31.736825    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:33:36.738742    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:33:36.738785    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:33:41.738897    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:33:41.738981    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:33:46.739440    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:33:46.739515    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:33:51.739982    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:33:51.740003    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:33:56.740593    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:33:56.740729    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:34:01.742494    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:34:01.742560    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:34:06.744071    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:34:06.744136    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:34:11.745964    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:34:11.745988    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:34:16.747980    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:34:16.748017    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:34:21.750191    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:34:21.750288    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:34:26.751519    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:34:26.751539    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:34:31.753519    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
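Every probe in the block above follows the same pattern: a GET against /healthz with a roughly five-second per-attempt timeout, repeated until an overall deadline; here the apiserver never answers, so each attempt ends in Client.Timeout. A rough shell equivalent of the probe loop (curl flags are illustrative, not minikube's actual client, which authenticates with the profile's client certs):

    # poll the apiserver healthz endpoint with a 5s per-attempt timeout
    until curl -ksf --max-time 5 https://10.0.2.15:8443/healthz; do
      echo "apiserver not healthy yet; retrying"
      sleep 5
    done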
	I0914 10:34:31.753671    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:34:31.769519    5189 logs.go:276] 2 containers: [11f9ffdf6e43 ea8a24c9014a]
	I0914 10:34:31.769609    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:34:31.782229    5189 logs.go:276] 2 containers: [46f64762f77a bc0eb1fe6478]
	I0914 10:34:31.782325    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:34:31.793288    5189 logs.go:276] 1 containers: [d58d98b98ad4]
	I0914 10:34:31.793361    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:34:31.803409    5189 logs.go:276] 2 containers: [1b357929f298 f2165e8cce8d]
	I0914 10:34:31.803484    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:34:31.813553    5189 logs.go:276] 1 containers: [213b21806615]
	I0914 10:34:31.813638    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:34:31.825398    5189 logs.go:276] 2 containers: [72fbfd868f6c ccbe87febee7]
	I0914 10:34:31.825483    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:34:31.835618    5189 logs.go:276] 0 containers: []
	W0914 10:34:31.835632    5189 logs.go:278] No container was found matching "kindnet"
	I0914 10:34:31.835702    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:34:31.846208    5189 logs.go:276] 2 containers: [db8219ef9871 7588b357ac42]
	I0914 10:34:31.846224    5189 logs.go:123] Gathering logs for kube-apiserver [ea8a24c9014a] ...
	I0914 10:34:31.846229    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea8a24c9014a"
	I0914 10:34:31.887445    5189 logs.go:123] Gathering logs for kube-scheduler [f2165e8cce8d] ...
	I0914 10:34:31.887457    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2165e8cce8d"
	I0914 10:34:31.902250    5189 logs.go:123] Gathering logs for kube-apiserver [11f9ffdf6e43] ...
	I0914 10:34:31.902261    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11f9ffdf6e43"
	I0914 10:34:31.916245    5189 logs.go:123] Gathering logs for etcd [bc0eb1fe6478] ...
	I0914 10:34:31.916261    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0eb1fe6478"
	I0914 10:34:31.930365    5189 logs.go:123] Gathering logs for kube-proxy [213b21806615] ...
	I0914 10:34:31.930374    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213b21806615"
	I0914 10:34:31.942268    5189 logs.go:123] Gathering logs for kubelet ...
	I0914 10:34:31.942280    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:34:31.980733    5189 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:34:31.980745    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:34:32.059310    5189 logs.go:123] Gathering logs for etcd [46f64762f77a] ...
	I0914 10:34:32.059326    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46f64762f77a"
	I0914 10:34:32.074472    5189 logs.go:123] Gathering logs for kube-controller-manager [ccbe87febee7] ...
	I0914 10:34:32.074483    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccbe87febee7"
	I0914 10:34:32.089157    5189 logs.go:123] Gathering logs for storage-provisioner [db8219ef9871] ...
	I0914 10:34:32.089167    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8219ef9871"
	I0914 10:34:32.100486    5189 logs.go:123] Gathering logs for container status ...
	I0914 10:34:32.100500    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:34:32.115148    5189 logs.go:123] Gathering logs for dmesg ...
	I0914 10:34:32.115159    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:34:32.119469    5189 logs.go:123] Gathering logs for coredns [d58d98b98ad4] ...
	I0914 10:34:32.119476    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d58d98b98ad4"
	I0914 10:34:32.131051    5189 logs.go:123] Gathering logs for kube-scheduler [1b357929f298] ...
	I0914 10:34:32.131061    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b357929f298"
	I0914 10:34:32.142531    5189 logs.go:123] Gathering logs for kube-controller-manager [72fbfd868f6c] ...
	I0914 10:34:32.142542    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72fbfd868f6c"
	I0914 10:34:32.162743    5189 logs.go:123] Gathering logs for storage-provisioner [7588b357ac42] ...
	I0914 10:34:32.162753    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7588b357ac42"
	I0914 10:34:32.174270    5189 logs.go:123] Gathering logs for Docker ...
	I0914 10:34:32.174280    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
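When the healthz wait times out, minikube falls back to the diagnostics pass above: it lists the containers for each control-plane component, then tails the last 400 log lines of each, alongside kubelet, dmesg, Docker, and node descriptions. The repeated docker-logs calls reduce to a loop like this (container IDs taken from the log; the list is abbreviated):

    # tail the last 400 lines of every control-plane container found by the name filters
    for id in 11f9ffdf6e43 ea8a24c9014a 46f64762f77a bc0eb1fe6478 d58d98b98ad4 213b21806615; do
      docker logs --tail 400 "$id"
    done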
	I0914 10:34:34.700269    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:34:39.702256    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:34:39.702418    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:34:39.715234    5189 logs.go:276] 2 containers: [11f9ffdf6e43 ea8a24c9014a]
	I0914 10:34:39.715330    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:34:39.726447    5189 logs.go:276] 2 containers: [46f64762f77a bc0eb1fe6478]
	I0914 10:34:39.726535    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:34:39.737256    5189 logs.go:276] 1 containers: [d58d98b98ad4]
	I0914 10:34:39.737339    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:34:39.748438    5189 logs.go:276] 2 containers: [1b357929f298 f2165e8cce8d]
	I0914 10:34:39.748533    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:34:39.759190    5189 logs.go:276] 1 containers: [213b21806615]
	I0914 10:34:39.759280    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:34:39.769873    5189 logs.go:276] 2 containers: [72fbfd868f6c ccbe87febee7]
	I0914 10:34:39.769954    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:34:39.780714    5189 logs.go:276] 0 containers: []
	W0914 10:34:39.780725    5189 logs.go:278] No container was found matching "kindnet"
	I0914 10:34:39.780794    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:34:39.791505    5189 logs.go:276] 2 containers: [db8219ef9871 7588b357ac42]
	I0914 10:34:39.791524    5189 logs.go:123] Gathering logs for Docker ...
	I0914 10:34:39.791529    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:34:39.816681    5189 logs.go:123] Gathering logs for container status ...
	I0914 10:34:39.816688    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:34:39.830365    5189 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:34:39.830379    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:34:39.868043    5189 logs.go:123] Gathering logs for etcd [46f64762f77a] ...
	I0914 10:34:39.868060    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46f64762f77a"
	I0914 10:34:39.881977    5189 logs.go:123] Gathering logs for etcd [bc0eb1fe6478] ...
	I0914 10:34:39.881988    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0eb1fe6478"
	I0914 10:34:39.896885    5189 logs.go:123] Gathering logs for storage-provisioner [db8219ef9871] ...
	I0914 10:34:39.896899    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8219ef9871"
	I0914 10:34:39.908440    5189 logs.go:123] Gathering logs for kube-controller-manager [72fbfd868f6c] ...
	I0914 10:34:39.908451    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72fbfd868f6c"
	I0914 10:34:39.927856    5189 logs.go:123] Gathering logs for storage-provisioner [7588b357ac42] ...
	I0914 10:34:39.927871    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7588b357ac42"
	I0914 10:34:39.939951    5189 logs.go:123] Gathering logs for kube-apiserver [11f9ffdf6e43] ...
	I0914 10:34:39.939964    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11f9ffdf6e43"
	I0914 10:34:39.956283    5189 logs.go:123] Gathering logs for kube-apiserver [ea8a24c9014a] ...
	I0914 10:34:39.956299    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea8a24c9014a"
	I0914 10:34:39.995998    5189 logs.go:123] Gathering logs for kube-scheduler [1b357929f298] ...
	I0914 10:34:39.996008    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b357929f298"
	I0914 10:34:40.007903    5189 logs.go:123] Gathering logs for kube-controller-manager [ccbe87febee7] ...
	I0914 10:34:40.007913    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccbe87febee7"
	I0914 10:34:40.021476    5189 logs.go:123] Gathering logs for kube-proxy [213b21806615] ...
	I0914 10:34:40.021487    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213b21806615"
	I0914 10:34:40.033329    5189 logs.go:123] Gathering logs for kubelet ...
	I0914 10:34:40.033338    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:34:40.072445    5189 logs.go:123] Gathering logs for dmesg ...
	I0914 10:34:40.072455    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:34:40.076458    5189 logs.go:123] Gathering logs for coredns [d58d98b98ad4] ...
	I0914 10:34:40.076465    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d58d98b98ad4"
	I0914 10:34:40.087361    5189 logs.go:123] Gathering logs for kube-scheduler [f2165e8cce8d] ...
	I0914 10:34:40.087373    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2165e8cce8d"
	I0914 10:34:42.603405    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:34:47.604951    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:34:47.605308    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:34:47.632173    5189 logs.go:276] 2 containers: [11f9ffdf6e43 ea8a24c9014a]
	I0914 10:34:47.632330    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:34:47.652058    5189 logs.go:276] 2 containers: [46f64762f77a bc0eb1fe6478]
	I0914 10:34:47.652145    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:34:47.665530    5189 logs.go:276] 1 containers: [d58d98b98ad4]
	I0914 10:34:47.665621    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:34:47.677009    5189 logs.go:276] 2 containers: [1b357929f298 f2165e8cce8d]
	I0914 10:34:47.677091    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:34:47.687676    5189 logs.go:276] 1 containers: [213b21806615]
	I0914 10:34:47.687756    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:34:47.698609    5189 logs.go:276] 2 containers: [72fbfd868f6c ccbe87febee7]
	I0914 10:34:47.698686    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:34:47.708999    5189 logs.go:276] 0 containers: []
	W0914 10:34:47.709011    5189 logs.go:278] No container was found matching "kindnet"
	I0914 10:34:47.709085    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:34:47.723969    5189 logs.go:276] 2 containers: [db8219ef9871 7588b357ac42]
	I0914 10:34:47.723985    5189 logs.go:123] Gathering logs for kube-proxy [213b21806615] ...
	I0914 10:34:47.723990    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213b21806615"
	I0914 10:34:47.735535    5189 logs.go:123] Gathering logs for kube-controller-manager [ccbe87febee7] ...
	I0914 10:34:47.735545    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccbe87febee7"
	I0914 10:34:47.754054    5189 logs.go:123] Gathering logs for Docker ...
	I0914 10:34:47.754065    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:34:47.779307    5189 logs.go:123] Gathering logs for kube-apiserver [ea8a24c9014a] ...
	I0914 10:34:47.779318    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea8a24c9014a"
	I0914 10:34:47.818534    5189 logs.go:123] Gathering logs for coredns [d58d98b98ad4] ...
	I0914 10:34:47.818547    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d58d98b98ad4"
	I0914 10:34:47.830534    5189 logs.go:123] Gathering logs for kube-scheduler [f2165e8cce8d] ...
	I0914 10:34:47.830546    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2165e8cce8d"
	I0914 10:34:47.848587    5189 logs.go:123] Gathering logs for kube-controller-manager [72fbfd868f6c] ...
	I0914 10:34:47.848598    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72fbfd868f6c"
	I0914 10:34:47.867505    5189 logs.go:123] Gathering logs for etcd [bc0eb1fe6478] ...
	I0914 10:34:47.867516    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0eb1fe6478"
	I0914 10:34:47.882656    5189 logs.go:123] Gathering logs for kube-scheduler [1b357929f298] ...
	I0914 10:34:47.882666    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b357929f298"
	I0914 10:34:47.895055    5189 logs.go:123] Gathering logs for storage-provisioner [db8219ef9871] ...
	I0914 10:34:47.895068    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8219ef9871"
	I0914 10:34:47.907835    5189 logs.go:123] Gathering logs for storage-provisioner [7588b357ac42] ...
	I0914 10:34:47.907845    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7588b357ac42"
	I0914 10:34:47.919527    5189 logs.go:123] Gathering logs for kubelet ...
	I0914 10:34:47.919537    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:34:47.957446    5189 logs.go:123] Gathering logs for dmesg ...
	I0914 10:34:47.957455    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:34:47.961783    5189 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:34:47.961790    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:34:47.996611    5189 logs.go:123] Gathering logs for kube-apiserver [11f9ffdf6e43] ...
	I0914 10:34:47.996624    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11f9ffdf6e43"
	I0914 10:34:48.012956    5189 logs.go:123] Gathering logs for etcd [46f64762f77a] ...
	I0914 10:34:48.012967    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46f64762f77a"
	I0914 10:34:48.027023    5189 logs.go:123] Gathering logs for container status ...
	I0914 10:34:48.027036    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:34:50.541507    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:34:55.543681    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:34:55.543852    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:34:55.555180    5189 logs.go:276] 2 containers: [11f9ffdf6e43 ea8a24c9014a]
	I0914 10:34:55.555259    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:34:55.565804    5189 logs.go:276] 2 containers: [46f64762f77a bc0eb1fe6478]
	I0914 10:34:55.565889    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:34:55.576241    5189 logs.go:276] 1 containers: [d58d98b98ad4]
	I0914 10:34:55.576326    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:34:55.586839    5189 logs.go:276] 2 containers: [1b357929f298 f2165e8cce8d]
	I0914 10:34:55.586912    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:34:55.596914    5189 logs.go:276] 1 containers: [213b21806615]
	I0914 10:34:55.596990    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:34:55.607251    5189 logs.go:276] 2 containers: [72fbfd868f6c ccbe87febee7]
	I0914 10:34:55.607328    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:34:55.617285    5189 logs.go:276] 0 containers: []
	W0914 10:34:55.617305    5189 logs.go:278] No container was found matching "kindnet"
	I0914 10:34:55.617378    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:34:55.627731    5189 logs.go:276] 2 containers: [db8219ef9871 7588b357ac42]
	I0914 10:34:55.627749    5189 logs.go:123] Gathering logs for kube-scheduler [1b357929f298] ...
	I0914 10:34:55.627755    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b357929f298"
	I0914 10:34:55.639440    5189 logs.go:123] Gathering logs for kube-controller-manager [72fbfd868f6c] ...
	I0914 10:34:55.639452    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72fbfd868f6c"
	I0914 10:34:55.656727    5189 logs.go:123] Gathering logs for storage-provisioner [db8219ef9871] ...
	I0914 10:34:55.656737    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8219ef9871"
	I0914 10:34:55.669504    5189 logs.go:123] Gathering logs for Docker ...
	I0914 10:34:55.669518    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:34:55.693752    5189 logs.go:123] Gathering logs for dmesg ...
	I0914 10:34:55.693761    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:34:55.698333    5189 logs.go:123] Gathering logs for etcd [46f64762f77a] ...
	I0914 10:34:55.698342    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46f64762f77a"
	I0914 10:34:55.712187    5189 logs.go:123] Gathering logs for etcd [bc0eb1fe6478] ...
	I0914 10:34:55.712197    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0eb1fe6478"
	I0914 10:34:55.727078    5189 logs.go:123] Gathering logs for kube-scheduler [f2165e8cce8d] ...
	I0914 10:34:55.727087    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2165e8cce8d"
	I0914 10:34:55.751817    5189 logs.go:123] Gathering logs for storage-provisioner [7588b357ac42] ...
	I0914 10:34:55.751826    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7588b357ac42"
	I0914 10:34:55.762680    5189 logs.go:123] Gathering logs for container status ...
	I0914 10:34:55.762691    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:34:55.774301    5189 logs.go:123] Gathering logs for kubelet ...
	I0914 10:34:55.774310    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:34:55.811751    5189 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:34:55.811765    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:34:55.851332    5189 logs.go:123] Gathering logs for kube-controller-manager [ccbe87febee7] ...
	I0914 10:34:55.851343    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccbe87febee7"
	I0914 10:34:55.865673    5189 logs.go:123] Gathering logs for kube-apiserver [11f9ffdf6e43] ...
	I0914 10:34:55.865686    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11f9ffdf6e43"
	I0914 10:34:55.880235    5189 logs.go:123] Gathering logs for kube-apiserver [ea8a24c9014a] ...
	I0914 10:34:55.880245    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea8a24c9014a"
	I0914 10:34:55.923496    5189 logs.go:123] Gathering logs for coredns [d58d98b98ad4] ...
	I0914 10:34:55.923506    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d58d98b98ad4"
	I0914 10:34:55.934450    5189 logs.go:123] Gathering logs for kube-proxy [213b21806615] ...
	I0914 10:34:55.934462    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213b21806615"
	I0914 10:34:58.446221    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:35:03.448717    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:35:03.449090    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:35:03.480605    5189 logs.go:276] 2 containers: [11f9ffdf6e43 ea8a24c9014a]
	I0914 10:35:03.480753    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:35:03.500255    5189 logs.go:276] 2 containers: [46f64762f77a bc0eb1fe6478]
	I0914 10:35:03.500357    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:35:03.514071    5189 logs.go:276] 1 containers: [d58d98b98ad4]
	I0914 10:35:03.514168    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:35:03.526125    5189 logs.go:276] 2 containers: [1b357929f298 f2165e8cce8d]
	I0914 10:35:03.526210    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:35:03.536803    5189 logs.go:276] 1 containers: [213b21806615]
	I0914 10:35:03.536888    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:35:03.547182    5189 logs.go:276] 2 containers: [72fbfd868f6c ccbe87febee7]
	I0914 10:35:03.547258    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:35:03.556927    5189 logs.go:276] 0 containers: []
	W0914 10:35:03.556941    5189 logs.go:278] No container was found matching "kindnet"
	I0914 10:35:03.557012    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:35:03.567959    5189 logs.go:276] 2 containers: [db8219ef9871 7588b357ac42]
	I0914 10:35:03.567978    5189 logs.go:123] Gathering logs for etcd [bc0eb1fe6478] ...
	I0914 10:35:03.567984    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0eb1fe6478"
	I0914 10:35:03.584303    5189 logs.go:123] Gathering logs for kube-scheduler [f2165e8cce8d] ...
	I0914 10:35:03.584313    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2165e8cce8d"
	I0914 10:35:03.598986    5189 logs.go:123] Gathering logs for kube-controller-manager [ccbe87febee7] ...
	I0914 10:35:03.598994    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccbe87febee7"
	I0914 10:35:03.613192    5189 logs.go:123] Gathering logs for kube-apiserver [11f9ffdf6e43] ...
	I0914 10:35:03.613204    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11f9ffdf6e43"
	I0914 10:35:03.627369    5189 logs.go:123] Gathering logs for etcd [46f64762f77a] ...
	I0914 10:35:03.627382    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46f64762f77a"
	I0914 10:35:03.641395    5189 logs.go:123] Gathering logs for kube-controller-manager [72fbfd868f6c] ...
	I0914 10:35:03.641405    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72fbfd868f6c"
	I0914 10:35:03.659065    5189 logs.go:123] Gathering logs for storage-provisioner [db8219ef9871] ...
	I0914 10:35:03.659074    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8219ef9871"
	I0914 10:35:03.670720    5189 logs.go:123] Gathering logs for storage-provisioner [7588b357ac42] ...
	I0914 10:35:03.670735    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7588b357ac42"
	I0914 10:35:03.682245    5189 logs.go:123] Gathering logs for Docker ...
	I0914 10:35:03.682255    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:35:03.706340    5189 logs.go:123] Gathering logs for container status ...
	I0914 10:35:03.706349    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:35:03.718047    5189 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:35:03.718057    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:35:03.753963    5189 logs.go:123] Gathering logs for kube-scheduler [1b357929f298] ...
	I0914 10:35:03.753975    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b357929f298"
	I0914 10:35:03.766562    5189 logs.go:123] Gathering logs for kubelet ...
	I0914 10:35:03.766573    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:35:03.804423    5189 logs.go:123] Gathering logs for dmesg ...
	I0914 10:35:03.804440    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:35:03.808964    5189 logs.go:123] Gathering logs for kube-apiserver [ea8a24c9014a] ...
	I0914 10:35:03.808979    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea8a24c9014a"
	I0914 10:35:03.848378    5189 logs.go:123] Gathering logs for coredns [d58d98b98ad4] ...
	I0914 10:35:03.848391    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d58d98b98ad4"
	I0914 10:35:03.859821    5189 logs.go:123] Gathering logs for kube-proxy [213b21806615] ...
	I0914 10:35:03.859833    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213b21806615"
	I0914 10:35:06.373673    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:35:11.375840    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:35:11.376120    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:35:11.401222    5189 logs.go:276] 2 containers: [11f9ffdf6e43 ea8a24c9014a]
	I0914 10:35:11.401372    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:35:11.417944    5189 logs.go:276] 2 containers: [46f64762f77a bc0eb1fe6478]
	I0914 10:35:11.418046    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:35:11.430870    5189 logs.go:276] 1 containers: [d58d98b98ad4]
	I0914 10:35:11.430940    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:35:11.442269    5189 logs.go:276] 2 containers: [1b357929f298 f2165e8cce8d]
	I0914 10:35:11.442339    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:35:11.453257    5189 logs.go:276] 1 containers: [213b21806615]
	I0914 10:35:11.453329    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:35:11.464071    5189 logs.go:276] 2 containers: [72fbfd868f6c ccbe87febee7]
	I0914 10:35:11.464133    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:35:11.478374    5189 logs.go:276] 0 containers: []
	W0914 10:35:11.478387    5189 logs.go:278] No container was found matching "kindnet"
	I0914 10:35:11.478456    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:35:11.489054    5189 logs.go:276] 2 containers: [db8219ef9871 7588b357ac42]
	I0914 10:35:11.489070    5189 logs.go:123] Gathering logs for kube-apiserver [11f9ffdf6e43] ...
	I0914 10:35:11.489076    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11f9ffdf6e43"
	I0914 10:35:11.503164    5189 logs.go:123] Gathering logs for coredns [d58d98b98ad4] ...
	I0914 10:35:11.503177    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d58d98b98ad4"
	I0914 10:35:11.514390    5189 logs.go:123] Gathering logs for kube-scheduler [1b357929f298] ...
	I0914 10:35:11.514402    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b357929f298"
	I0914 10:35:11.526224    5189 logs.go:123] Gathering logs for storage-provisioner [db8219ef9871] ...
	I0914 10:35:11.526236    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8219ef9871"
	I0914 10:35:11.538812    5189 logs.go:123] Gathering logs for kubelet ...
	I0914 10:35:11.538827    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:35:11.575605    5189 logs.go:123] Gathering logs for dmesg ...
	I0914 10:35:11.575614    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:35:11.579630    5189 logs.go:123] Gathering logs for etcd [bc0eb1fe6478] ...
	I0914 10:35:11.579639    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0eb1fe6478"
	I0914 10:35:11.594556    5189 logs.go:123] Gathering logs for kube-proxy [213b21806615] ...
	I0914 10:35:11.594565    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213b21806615"
	I0914 10:35:11.606987    5189 logs.go:123] Gathering logs for kube-controller-manager [72fbfd868f6c] ...
	I0914 10:35:11.606998    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72fbfd868f6c"
	I0914 10:35:11.627961    5189 logs.go:123] Gathering logs for kube-controller-manager [ccbe87febee7] ...
	I0914 10:35:11.627970    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccbe87febee7"
	I0914 10:35:11.642285    5189 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:35:11.642300    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:35:11.698676    5189 logs.go:123] Gathering logs for kube-apiserver [ea8a24c9014a] ...
	I0914 10:35:11.698688    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea8a24c9014a"
	I0914 10:35:11.737609    5189 logs.go:123] Gathering logs for etcd [46f64762f77a] ...
	I0914 10:35:11.737619    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46f64762f77a"
	I0914 10:35:11.752113    5189 logs.go:123] Gathering logs for kube-scheduler [f2165e8cce8d] ...
	I0914 10:35:11.752125    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2165e8cce8d"
	I0914 10:35:11.766893    5189 logs.go:123] Gathering logs for storage-provisioner [7588b357ac42] ...
	I0914 10:35:11.766904    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7588b357ac42"
	I0914 10:35:11.778238    5189 logs.go:123] Gathering logs for Docker ...
	I0914 10:35:11.778249    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:35:11.802695    5189 logs.go:123] Gathering logs for container status ...
	I0914 10:35:11.802708    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:35:14.316669    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:35:19.318952    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:35:19.319152    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:35:19.335449    5189 logs.go:276] 2 containers: [11f9ffdf6e43 ea8a24c9014a]
	I0914 10:35:19.335547    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:35:19.352032    5189 logs.go:276] 2 containers: [46f64762f77a bc0eb1fe6478]
	I0914 10:35:19.352119    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:35:19.362643    5189 logs.go:276] 1 containers: [d58d98b98ad4]
	I0914 10:35:19.362734    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:35:19.373178    5189 logs.go:276] 2 containers: [1b357929f298 f2165e8cce8d]
	I0914 10:35:19.373265    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:35:19.383502    5189 logs.go:276] 1 containers: [213b21806615]
	I0914 10:35:19.383584    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:35:19.394072    5189 logs.go:276] 2 containers: [72fbfd868f6c ccbe87febee7]
	I0914 10:35:19.394166    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:35:19.407334    5189 logs.go:276] 0 containers: []
	W0914 10:35:19.407347    5189 logs.go:278] No container was found matching "kindnet"
	I0914 10:35:19.407424    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:35:19.418212    5189 logs.go:276] 2 containers: [db8219ef9871 7588b357ac42]
	I0914 10:35:19.418238    5189 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:35:19.418244    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:35:19.455949    5189 logs.go:123] Gathering logs for kube-apiserver [11f9ffdf6e43] ...
	I0914 10:35:19.455959    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11f9ffdf6e43"
	I0914 10:35:19.469874    5189 logs.go:123] Gathering logs for etcd [46f64762f77a] ...
	I0914 10:35:19.469885    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46f64762f77a"
	I0914 10:35:19.487998    5189 logs.go:123] Gathering logs for kube-proxy [213b21806615] ...
	I0914 10:35:19.488010    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213b21806615"
	I0914 10:35:19.500133    5189 logs.go:123] Gathering logs for kubelet ...
	I0914 10:35:19.500143    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:35:19.538190    5189 logs.go:123] Gathering logs for dmesg ...
	I0914 10:35:19.538198    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:35:19.542579    5189 logs.go:123] Gathering logs for coredns [d58d98b98ad4] ...
	I0914 10:35:19.542589    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d58d98b98ad4"
	I0914 10:35:19.553452    5189 logs.go:123] Gathering logs for kube-controller-manager [72fbfd868f6c] ...
	I0914 10:35:19.553465    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72fbfd868f6c"
	I0914 10:35:19.570946    5189 logs.go:123] Gathering logs for storage-provisioner [7588b357ac42] ...
	I0914 10:35:19.570955    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7588b357ac42"
	I0914 10:35:19.582001    5189 logs.go:123] Gathering logs for Docker ...
	I0914 10:35:19.582012    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:35:19.607029    5189 logs.go:123] Gathering logs for kube-apiserver [ea8a24c9014a] ...
	I0914 10:35:19.607043    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea8a24c9014a"
	I0914 10:35:19.652980    5189 logs.go:123] Gathering logs for kube-scheduler [1b357929f298] ...
	I0914 10:35:19.652991    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b357929f298"
	I0914 10:35:19.670849    5189 logs.go:123] Gathering logs for kube-scheduler [f2165e8cce8d] ...
	I0914 10:35:19.670861    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2165e8cce8d"
	I0914 10:35:19.694407    5189 logs.go:123] Gathering logs for etcd [bc0eb1fe6478] ...
	I0914 10:35:19.694423    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0eb1fe6478"
	I0914 10:35:19.712751    5189 logs.go:123] Gathering logs for kube-controller-manager [ccbe87febee7] ...
	I0914 10:35:19.712761    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccbe87febee7"
	I0914 10:35:19.729866    5189 logs.go:123] Gathering logs for storage-provisioner [db8219ef9871] ...
	I0914 10:35:19.729875    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8219ef9871"
	I0914 10:35:19.741307    5189 logs.go:123] Gathering logs for container status ...
	I0914 10:35:19.741316    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
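Each cycle in this log follows the same shape: minikube probes the apiserver's /healthz endpoint, the probe fails after five seconds with "Client.Timeout exceeded while awaiting headers", and the tool falls back to collecting logs from every control-plane container before retrying. A minimal Go sketch of that polling pattern, assuming a plain net/http client (the URL and five-second timeout mirror the log above; the loop, deadline, and TLS handling are illustrative assumptions, not minikube's actual implementation):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// A 5s client timeout reproduces the "Client.Timeout exceeded"
		// failures seen in the healthz checks above.
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// The apiserver serves a self-signed certificate in this
				// setup; skipping verification here is illustrative only.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://10.0.2.15:8443/healthz")
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("apiserver healthy")
					return
				}
			}
			// On failure the real tool gathers diagnostics, then retries
			// after a short pause (~2.5s between cycles in this log).
			time.Sleep(2500 * time.Millisecond)
		}
		fmt.Println("gave up waiting for apiserver")
	}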
	I0914 10:35:22.255994    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:35:27.258272    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:35:27.258550    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:35:27.282496    5189 logs.go:276] 2 containers: [11f9ffdf6e43 ea8a24c9014a]
	I0914 10:35:27.282653    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:35:27.298438    5189 logs.go:276] 2 containers: [46f64762f77a bc0eb1fe6478]
	I0914 10:35:27.298546    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:35:27.311647    5189 logs.go:276] 1 containers: [d58d98b98ad4]
	I0914 10:35:27.311745    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:35:27.322261    5189 logs.go:276] 2 containers: [1b357929f298 f2165e8cce8d]
	I0914 10:35:27.322349    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:35:27.333151    5189 logs.go:276] 1 containers: [213b21806615]
	I0914 10:35:27.333230    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:35:27.344112    5189 logs.go:276] 2 containers: [72fbfd868f6c ccbe87febee7]
	I0914 10:35:27.344199    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:35:27.354267    5189 logs.go:276] 0 containers: []
	W0914 10:35:27.354283    5189 logs.go:278] No container was found matching "kindnet"
	I0914 10:35:27.354355    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:35:27.370290    5189 logs.go:276] 2 containers: [db8219ef9871 7588b357ac42]
	I0914 10:35:27.370315    5189 logs.go:123] Gathering logs for dmesg ...
	I0914 10:35:27.370320    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:35:27.374752    5189 logs.go:123] Gathering logs for etcd [46f64762f77a] ...
	I0914 10:35:27.374758    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46f64762f77a"
	I0914 10:35:27.389157    5189 logs.go:123] Gathering logs for Docker ...
	I0914 10:35:27.389168    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:35:27.414225    5189 logs.go:123] Gathering logs for kube-proxy [213b21806615] ...
	I0914 10:35:27.414234    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213b21806615"
	I0914 10:35:27.425918    5189 logs.go:123] Gathering logs for kube-controller-manager [ccbe87febee7] ...
	I0914 10:35:27.425928    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccbe87febee7"
	I0914 10:35:27.439129    5189 logs.go:123] Gathering logs for storage-provisioner [db8219ef9871] ...
	I0914 10:35:27.439142    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8219ef9871"
	I0914 10:35:27.451038    5189 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:35:27.451050    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:35:27.485548    5189 logs.go:123] Gathering logs for kube-apiserver [11f9ffdf6e43] ...
	I0914 10:35:27.485560    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11f9ffdf6e43"
	I0914 10:35:27.504757    5189 logs.go:123] Gathering logs for kube-apiserver [ea8a24c9014a] ...
	I0914 10:35:27.504773    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea8a24c9014a"
	I0914 10:35:27.545404    5189 logs.go:123] Gathering logs for storage-provisioner [7588b357ac42] ...
	I0914 10:35:27.545415    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7588b357ac42"
	I0914 10:35:27.556775    5189 logs.go:123] Gathering logs for container status ...
	I0914 10:35:27.556785    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:35:27.569352    5189 logs.go:123] Gathering logs for etcd [bc0eb1fe6478] ...
	I0914 10:35:27.569367    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0eb1fe6478"
	I0914 10:35:27.583504    5189 logs.go:123] Gathering logs for kube-scheduler [f2165e8cce8d] ...
	I0914 10:35:27.583514    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2165e8cce8d"
	I0914 10:35:27.598235    5189 logs.go:123] Gathering logs for kube-controller-manager [72fbfd868f6c] ...
	I0914 10:35:27.598246    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72fbfd868f6c"
	I0914 10:35:27.617554    5189 logs.go:123] Gathering logs for kubelet ...
	I0914 10:35:27.617563    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:35:27.656626    5189 logs.go:123] Gathering logs for coredns [d58d98b98ad4] ...
	I0914 10:35:27.656635    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d58d98b98ad4"
	I0914 10:35:27.668140    5189 logs.go:123] Gathering logs for kube-scheduler [1b357929f298] ...
	I0914 10:35:27.668153    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b357929f298"
	I0914 10:35:30.181729    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:35:35.183872    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:35:35.184091    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:35:35.200275    5189 logs.go:276] 2 containers: [11f9ffdf6e43 ea8a24c9014a]
	I0914 10:35:35.200378    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:35:35.213844    5189 logs.go:276] 2 containers: [46f64762f77a bc0eb1fe6478]
	I0914 10:35:35.213939    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:35:35.232765    5189 logs.go:276] 1 containers: [d58d98b98ad4]
	I0914 10:35:35.232848    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:35:35.252423    5189 logs.go:276] 2 containers: [1b357929f298 f2165e8cce8d]
	I0914 10:35:35.252512    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:35:35.268125    5189 logs.go:276] 1 containers: [213b21806615]
	I0914 10:35:35.268215    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:35:35.278898    5189 logs.go:276] 2 containers: [72fbfd868f6c ccbe87febee7]
	I0914 10:35:35.278982    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:35:35.289654    5189 logs.go:276] 0 containers: []
	W0914 10:35:35.289664    5189 logs.go:278] No container was found matching "kindnet"
	I0914 10:35:35.289728    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:35:35.300413    5189 logs.go:276] 2 containers: [db8219ef9871 7588b357ac42]
	I0914 10:35:35.300432    5189 logs.go:123] Gathering logs for Docker ...
	I0914 10:35:35.300438    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:35:35.323417    5189 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:35:35.323428    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:35:35.359246    5189 logs.go:123] Gathering logs for kube-apiserver [ea8a24c9014a] ...
	I0914 10:35:35.359256    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea8a24c9014a"
	I0914 10:35:35.397081    5189 logs.go:123] Gathering logs for kube-controller-manager [72fbfd868f6c] ...
	I0914 10:35:35.397092    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72fbfd868f6c"
	I0914 10:35:35.414458    5189 logs.go:123] Gathering logs for kube-controller-manager [ccbe87febee7] ...
	I0914 10:35:35.414468    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccbe87febee7"
	I0914 10:35:35.427626    5189 logs.go:123] Gathering logs for storage-provisioner [db8219ef9871] ...
	I0914 10:35:35.427637    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8219ef9871"
	I0914 10:35:35.439160    5189 logs.go:123] Gathering logs for storage-provisioner [7588b357ac42] ...
	I0914 10:35:35.439172    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7588b357ac42"
	I0914 10:35:35.450419    5189 logs.go:123] Gathering logs for dmesg ...
	I0914 10:35:35.450431    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:35:35.454918    5189 logs.go:123] Gathering logs for etcd [46f64762f77a] ...
	I0914 10:35:35.454927    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46f64762f77a"
	I0914 10:35:35.468668    5189 logs.go:123] Gathering logs for etcd [bc0eb1fe6478] ...
	I0914 10:35:35.468681    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0eb1fe6478"
	I0914 10:35:35.488363    5189 logs.go:123] Gathering logs for kube-scheduler [1b357929f298] ...
	I0914 10:35:35.488376    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b357929f298"
	I0914 10:35:35.500487    5189 logs.go:123] Gathering logs for coredns [d58d98b98ad4] ...
	I0914 10:35:35.500497    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d58d98b98ad4"
	I0914 10:35:35.511704    5189 logs.go:123] Gathering logs for kube-scheduler [f2165e8cce8d] ...
	I0914 10:35:35.511715    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2165e8cce8d"
	I0914 10:35:35.525420    5189 logs.go:123] Gathering logs for kubelet ...
	I0914 10:35:35.525430    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:35:35.562441    5189 logs.go:123] Gathering logs for kube-apiserver [11f9ffdf6e43] ...
	I0914 10:35:35.562449    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11f9ffdf6e43"
	I0914 10:35:35.576084    5189 logs.go:123] Gathering logs for kube-proxy [213b21806615] ...
	I0914 10:35:35.576095    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213b21806615"
	I0914 10:35:35.587897    5189 logs.go:123] Gathering logs for container status ...
	I0914 10:35:35.587908    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
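The "2 containers: [...]" lines above come from enumerating running and exited control-plane containers by name prefix, using docker's name filter and a Go template for the output. A hedged sketch of the same discovery step (the filter and format strings are taken verbatim from the log; driving them through os/exec like this is an assumption, not minikube's code):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs lists the IDs of containers whose name matches the
	// given k8s_* prefix, e.g. "k8s_kube-apiserver", including stopped
	// ones (-a), which is why two IDs per component appear above.
	func containerIDs(name string) ([]string, error) {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name="+name,
			"--format", "{{.ID}}").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		ids, err := containerIDs("k8s_kube-apiserver")
		if err != nil {
			fmt.Println("docker ps failed:", err)
			return
		}
		// Two IDs means a current and a previous (exited) instance.
		fmt.Printf("%d containers: %v\n", len(ids), ids)
	}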
	I0914 10:35:38.101478    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:35:43.103715    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:35:43.103903    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:35:43.120069    5189 logs.go:276] 2 containers: [11f9ffdf6e43 ea8a24c9014a]
	I0914 10:35:43.120176    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:35:43.132528    5189 logs.go:276] 2 containers: [46f64762f77a bc0eb1fe6478]
	I0914 10:35:43.132618    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:35:43.143036    5189 logs.go:276] 1 containers: [d58d98b98ad4]
	I0914 10:35:43.143130    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:35:43.153861    5189 logs.go:276] 2 containers: [1b357929f298 f2165e8cce8d]
	I0914 10:35:43.153943    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:35:43.164402    5189 logs.go:276] 1 containers: [213b21806615]
	I0914 10:35:43.164488    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:35:43.174632    5189 logs.go:276] 2 containers: [72fbfd868f6c ccbe87febee7]
	I0914 10:35:43.174712    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:35:43.184984    5189 logs.go:276] 0 containers: []
	W0914 10:35:43.184995    5189 logs.go:278] No container was found matching "kindnet"
	I0914 10:35:43.185069    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:35:43.195736    5189 logs.go:276] 2 containers: [db8219ef9871 7588b357ac42]
	I0914 10:35:43.195753    5189 logs.go:123] Gathering logs for dmesg ...
	I0914 10:35:43.195759    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:35:43.199868    5189 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:35:43.199878    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:35:43.233975    5189 logs.go:123] Gathering logs for coredns [d58d98b98ad4] ...
	I0914 10:35:43.233985    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d58d98b98ad4"
	I0914 10:35:43.246157    5189 logs.go:123] Gathering logs for kube-scheduler [f2165e8cce8d] ...
	I0914 10:35:43.246169    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2165e8cce8d"
	I0914 10:35:43.261436    5189 logs.go:123] Gathering logs for kube-proxy [213b21806615] ...
	I0914 10:35:43.261446    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213b21806615"
	I0914 10:35:43.272831    5189 logs.go:123] Gathering logs for kube-controller-manager [ccbe87febee7] ...
	I0914 10:35:43.272840    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccbe87febee7"
	I0914 10:35:43.286748    5189 logs.go:123] Gathering logs for kube-apiserver [ea8a24c9014a] ...
	I0914 10:35:43.286763    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea8a24c9014a"
	I0914 10:35:43.324197    5189 logs.go:123] Gathering logs for etcd [bc0eb1fe6478] ...
	I0914 10:35:43.324208    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0eb1fe6478"
	I0914 10:35:43.338404    5189 logs.go:123] Gathering logs for kube-scheduler [1b357929f298] ...
	I0914 10:35:43.338416    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b357929f298"
	I0914 10:35:43.349719    5189 logs.go:123] Gathering logs for Docker ...
	I0914 10:35:43.349732    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:35:43.373163    5189 logs.go:123] Gathering logs for container status ...
	I0914 10:35:43.373171    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:35:43.384826    5189 logs.go:123] Gathering logs for kubelet ...
	I0914 10:35:43.384841    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:35:43.421482    5189 logs.go:123] Gathering logs for kube-apiserver [11f9ffdf6e43] ...
	I0914 10:35:43.421489    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11f9ffdf6e43"
	I0914 10:35:43.435559    5189 logs.go:123] Gathering logs for storage-provisioner [db8219ef9871] ...
	I0914 10:35:43.435570    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8219ef9871"
	I0914 10:35:43.446675    5189 logs.go:123] Gathering logs for etcd [46f64762f77a] ...
	I0914 10:35:43.446686    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46f64762f77a"
	I0914 10:35:43.461094    5189 logs.go:123] Gathering logs for kube-controller-manager [72fbfd868f6c] ...
	I0914 10:35:43.461104    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72fbfd868f6c"
	I0914 10:35:43.479193    5189 logs.go:123] Gathering logs for storage-provisioner [7588b357ac42] ...
	I0914 10:35:43.479202    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7588b357ac42"
	I0914 10:35:45.992373    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:35:50.994509    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:35:50.994667    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:35:51.006033    5189 logs.go:276] 2 containers: [11f9ffdf6e43 ea8a24c9014a]
	I0914 10:35:51.006116    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:35:51.018662    5189 logs.go:276] 2 containers: [46f64762f77a bc0eb1fe6478]
	I0914 10:35:51.018749    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:35:51.032204    5189 logs.go:276] 1 containers: [d58d98b98ad4]
	I0914 10:35:51.032292    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:35:51.042627    5189 logs.go:276] 2 containers: [1b357929f298 f2165e8cce8d]
	I0914 10:35:51.042716    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:35:51.053574    5189 logs.go:276] 1 containers: [213b21806615]
	I0914 10:35:51.053658    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:35:51.064699    5189 logs.go:276] 2 containers: [72fbfd868f6c ccbe87febee7]
	I0914 10:35:51.064790    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:35:51.075106    5189 logs.go:276] 0 containers: []
	W0914 10:35:51.075116    5189 logs.go:278] No container was found matching "kindnet"
	I0914 10:35:51.075189    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:35:51.087583    5189 logs.go:276] 2 containers: [db8219ef9871 7588b357ac42]
	I0914 10:35:51.087601    5189 logs.go:123] Gathering logs for Docker ...
	I0914 10:35:51.087606    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:35:51.110034    5189 logs.go:123] Gathering logs for container status ...
	I0914 10:35:51.110042    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:35:51.121856    5189 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:35:51.121867    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:35:51.155894    5189 logs.go:123] Gathering logs for kube-apiserver [11f9ffdf6e43] ...
	I0914 10:35:51.155905    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11f9ffdf6e43"
	I0914 10:35:51.169701    5189 logs.go:123] Gathering logs for kube-apiserver [ea8a24c9014a] ...
	I0914 10:35:51.169713    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea8a24c9014a"
	I0914 10:35:51.206992    5189 logs.go:123] Gathering logs for kube-controller-manager [72fbfd868f6c] ...
	I0914 10:35:51.207003    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72fbfd868f6c"
	I0914 10:35:51.230361    5189 logs.go:123] Gathering logs for kubelet ...
	I0914 10:35:51.230371    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:35:51.269994    5189 logs.go:123] Gathering logs for coredns [d58d98b98ad4] ...
	I0914 10:35:51.270007    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d58d98b98ad4"
	I0914 10:35:51.281229    5189 logs.go:123] Gathering logs for kube-scheduler [f2165e8cce8d] ...
	I0914 10:35:51.281240    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2165e8cce8d"
	I0914 10:35:51.295955    5189 logs.go:123] Gathering logs for kube-proxy [213b21806615] ...
	I0914 10:35:51.295964    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213b21806615"
	I0914 10:35:51.307705    5189 logs.go:123] Gathering logs for kube-controller-manager [ccbe87febee7] ...
	I0914 10:35:51.307716    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccbe87febee7"
	I0914 10:35:51.321391    5189 logs.go:123] Gathering logs for storage-provisioner [db8219ef9871] ...
	I0914 10:35:51.321425    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8219ef9871"
	I0914 10:35:51.333207    5189 logs.go:123] Gathering logs for storage-provisioner [7588b357ac42] ...
	I0914 10:35:51.333221    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7588b357ac42"
	I0914 10:35:51.344293    5189 logs.go:123] Gathering logs for dmesg ...
	I0914 10:35:51.344303    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:35:51.348738    5189 logs.go:123] Gathering logs for etcd [46f64762f77a] ...
	I0914 10:35:51.348746    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46f64762f77a"
	I0914 10:35:51.362500    5189 logs.go:123] Gathering logs for etcd [bc0eb1fe6478] ...
	I0914 10:35:51.362512    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0eb1fe6478"
	I0914 10:35:51.376883    5189 logs.go:123] Gathering logs for kube-scheduler [1b357929f298] ...
	I0914 10:35:51.376894    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b357929f298"
	I0914 10:35:53.890840    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:35:58.891001    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:35:58.891131    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:35:58.902564    5189 logs.go:276] 2 containers: [11f9ffdf6e43 ea8a24c9014a]
	I0914 10:35:58.902653    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:35:58.913372    5189 logs.go:276] 2 containers: [46f64762f77a bc0eb1fe6478]
	I0914 10:35:58.913461    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:35:58.932319    5189 logs.go:276] 1 containers: [d58d98b98ad4]
	I0914 10:35:58.932405    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:35:58.942626    5189 logs.go:276] 2 containers: [1b357929f298 f2165e8cce8d]
	I0914 10:35:58.942708    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:35:58.952913    5189 logs.go:276] 1 containers: [213b21806615]
	I0914 10:35:58.952996    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:35:58.964039    5189 logs.go:276] 2 containers: [72fbfd868f6c ccbe87febee7]
	I0914 10:35:58.964125    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:35:58.974062    5189 logs.go:276] 0 containers: []
	W0914 10:35:58.974078    5189 logs.go:278] No container was found matching "kindnet"
	I0914 10:35:58.974154    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:35:58.984544    5189 logs.go:276] 2 containers: [db8219ef9871 7588b357ac42]
	I0914 10:35:58.984562    5189 logs.go:123] Gathering logs for storage-provisioner [db8219ef9871] ...
	I0914 10:35:58.984567    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8219ef9871"
	I0914 10:35:58.995895    5189 logs.go:123] Gathering logs for Docker ...
	I0914 10:35:58.995903    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:35:59.020335    5189 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:35:59.020346    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:35:59.058767    5189 logs.go:123] Gathering logs for kube-apiserver [11f9ffdf6e43] ...
	I0914 10:35:59.058778    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11f9ffdf6e43"
	I0914 10:35:59.072519    5189 logs.go:123] Gathering logs for kube-scheduler [1b357929f298] ...
	I0914 10:35:59.072529    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b357929f298"
	I0914 10:35:59.084227    5189 logs.go:123] Gathering logs for kube-scheduler [f2165e8cce8d] ...
	I0914 10:35:59.084237    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2165e8cce8d"
	I0914 10:35:59.098653    5189 logs.go:123] Gathering logs for kube-proxy [213b21806615] ...
	I0914 10:35:59.098668    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213b21806615"
	I0914 10:35:59.110247    5189 logs.go:123] Gathering logs for kubelet ...
	I0914 10:35:59.110259    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:35:59.147092    5189 logs.go:123] Gathering logs for etcd [bc0eb1fe6478] ...
	I0914 10:35:59.147105    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0eb1fe6478"
	I0914 10:35:59.164236    5189 logs.go:123] Gathering logs for storage-provisioner [7588b357ac42] ...
	I0914 10:35:59.164246    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7588b357ac42"
	I0914 10:35:59.179808    5189 logs.go:123] Gathering logs for dmesg ...
	I0914 10:35:59.179819    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:35:59.184068    5189 logs.go:123] Gathering logs for kube-apiserver [ea8a24c9014a] ...
	I0914 10:35:59.184077    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea8a24c9014a"
	I0914 10:35:59.221818    5189 logs.go:123] Gathering logs for etcd [46f64762f77a] ...
	I0914 10:35:59.221829    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46f64762f77a"
	I0914 10:35:59.235908    5189 logs.go:123] Gathering logs for coredns [d58d98b98ad4] ...
	I0914 10:35:59.235918    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d58d98b98ad4"
	I0914 10:35:59.246903    5189 logs.go:123] Gathering logs for kube-controller-manager [72fbfd868f6c] ...
	I0914 10:35:59.246914    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72fbfd868f6c"
	I0914 10:35:59.264191    5189 logs.go:123] Gathering logs for kube-controller-manager [ccbe87febee7] ...
	I0914 10:35:59.264201    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccbe87febee7"
	I0914 10:35:59.277584    5189 logs.go:123] Gathering logs for container status ...
	I0914 10:35:59.277594    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
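The "container status" step at the end of each cycle runs a small shell fallback: prefer crictl when it is installed (the backquoted `which crictl || echo crictl` keeps the command line valid either way), and if that invocation fails, fall back to `sudo docker ps -a`. The same preference order expressed in Go, as an illustrative sketch only (the helper name containerStatus is hypothetical):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// containerStatus prefers crictl and falls back to docker,
	// mirroring the shell chain in the log:
	//   sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
	func containerStatus() ([]byte, error) {
		if path, err := exec.LookPath("crictl"); err == nil {
			if out, err := exec.Command("sudo", path, "ps", "-a").Output(); err == nil {
				return out, nil
			}
		}
		return exec.Command("sudo", "docker", "ps", "-a").Output()
	}

	func main() {
		out, err := containerStatus()
		if err != nil {
			fmt.Println("no container runtime answered:", err)
			return
		}
		fmt.Print(string(out))
	}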
	I0914 10:36:01.792506    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:36:06.794727    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:36:06.794882    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:36:06.806620    5189 logs.go:276] 2 containers: [11f9ffdf6e43 ea8a24c9014a]
	I0914 10:36:06.806709    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:36:06.818148    5189 logs.go:276] 2 containers: [46f64762f77a bc0eb1fe6478]
	I0914 10:36:06.818248    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:36:06.830270    5189 logs.go:276] 1 containers: [d58d98b98ad4]
	I0914 10:36:06.830361    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:36:06.840422    5189 logs.go:276] 2 containers: [1b357929f298 f2165e8cce8d]
	I0914 10:36:06.840514    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:36:06.850889    5189 logs.go:276] 1 containers: [213b21806615]
	I0914 10:36:06.850969    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:36:06.861635    5189 logs.go:276] 2 containers: [72fbfd868f6c ccbe87febee7]
	I0914 10:36:06.861714    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:36:06.872011    5189 logs.go:276] 0 containers: []
	W0914 10:36:06.872023    5189 logs.go:278] No container was found matching "kindnet"
	I0914 10:36:06.872099    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:36:06.886437    5189 logs.go:276] 2 containers: [db8219ef9871 7588b357ac42]
	I0914 10:36:06.886454    5189 logs.go:123] Gathering logs for coredns [d58d98b98ad4] ...
	I0914 10:36:06.886459    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d58d98b98ad4"
	I0914 10:36:06.897843    5189 logs.go:123] Gathering logs for kube-proxy [213b21806615] ...
	I0914 10:36:06.897856    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213b21806615"
	I0914 10:36:06.910136    5189 logs.go:123] Gathering logs for kubelet ...
	I0914 10:36:06.910149    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:36:06.948970    5189 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:36:06.948978    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:36:06.991054    5189 logs.go:123] Gathering logs for etcd [46f64762f77a] ...
	I0914 10:36:06.991068    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46f64762f77a"
	I0914 10:36:07.005090    5189 logs.go:123] Gathering logs for kube-apiserver [11f9ffdf6e43] ...
	I0914 10:36:07.005103    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11f9ffdf6e43"
	I0914 10:36:07.018786    5189 logs.go:123] Gathering logs for kube-scheduler [1b357929f298] ...
	I0914 10:36:07.018796    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b357929f298"
	I0914 10:36:07.031301    5189 logs.go:123] Gathering logs for storage-provisioner [7588b357ac42] ...
	I0914 10:36:07.031312    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7588b357ac42"
	I0914 10:36:07.050991    5189 logs.go:123] Gathering logs for container status ...
	I0914 10:36:07.051003    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:36:07.063147    5189 logs.go:123] Gathering logs for kube-apiserver [ea8a24c9014a] ...
	I0914 10:36:07.063163    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea8a24c9014a"
	I0914 10:36:07.106196    5189 logs.go:123] Gathering logs for etcd [bc0eb1fe6478] ...
	I0914 10:36:07.106210    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0eb1fe6478"
	I0914 10:36:07.120842    5189 logs.go:123] Gathering logs for kube-controller-manager [72fbfd868f6c] ...
	I0914 10:36:07.120853    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72fbfd868f6c"
	I0914 10:36:07.141588    5189 logs.go:123] Gathering logs for storage-provisioner [db8219ef9871] ...
	I0914 10:36:07.141600    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8219ef9871"
	I0914 10:36:07.155901    5189 logs.go:123] Gathering logs for Docker ...
	I0914 10:36:07.155916    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:36:07.179723    5189 logs.go:123] Gathering logs for dmesg ...
	I0914 10:36:07.179735    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:36:07.183884    5189 logs.go:123] Gathering logs for kube-scheduler [f2165e8cce8d] ...
	I0914 10:36:07.183890    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2165e8cce8d"
	I0914 10:36:07.198438    5189 logs.go:123] Gathering logs for kube-controller-manager [ccbe87febee7] ...
	I0914 10:36:07.198452    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccbe87febee7"
	I0914 10:36:09.713979    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:36:14.716216    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:36:14.716392    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:36:14.728374    5189 logs.go:276] 2 containers: [11f9ffdf6e43 ea8a24c9014a]
	I0914 10:36:14.728473    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:36:14.742979    5189 logs.go:276] 2 containers: [46f64762f77a bc0eb1fe6478]
	I0914 10:36:14.743071    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:36:14.753670    5189 logs.go:276] 1 containers: [d58d98b98ad4]
	I0914 10:36:14.753758    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:36:14.765104    5189 logs.go:276] 2 containers: [1b357929f298 f2165e8cce8d]
	I0914 10:36:14.765192    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:36:14.777768    5189 logs.go:276] 1 containers: [213b21806615]
	I0914 10:36:14.777848    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:36:14.790162    5189 logs.go:276] 2 containers: [72fbfd868f6c ccbe87febee7]
	I0914 10:36:14.790247    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:36:14.801287    5189 logs.go:276] 0 containers: []
	W0914 10:36:14.801298    5189 logs.go:278] No container was found matching "kindnet"
	I0914 10:36:14.801370    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:36:14.812233    5189 logs.go:276] 2 containers: [db8219ef9871 7588b357ac42]
	I0914 10:36:14.812251    5189 logs.go:123] Gathering logs for kube-apiserver [ea8a24c9014a] ...
	I0914 10:36:14.812256    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea8a24c9014a"
	I0914 10:36:14.852709    5189 logs.go:123] Gathering logs for kube-proxy [213b21806615] ...
	I0914 10:36:14.852723    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213b21806615"
	I0914 10:36:14.867785    5189 logs.go:123] Gathering logs for kube-controller-manager [72fbfd868f6c] ...
	I0914 10:36:14.867797    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72fbfd868f6c"
	I0914 10:36:14.885694    5189 logs.go:123] Gathering logs for storage-provisioner [db8219ef9871] ...
	I0914 10:36:14.885709    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8219ef9871"
	I0914 10:36:14.897365    5189 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:36:14.897381    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:36:14.931784    5189 logs.go:123] Gathering logs for etcd [46f64762f77a] ...
	I0914 10:36:14.931796    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46f64762f77a"
	I0914 10:36:14.945878    5189 logs.go:123] Gathering logs for kube-scheduler [1b357929f298] ...
	I0914 10:36:14.945891    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b357929f298"
	I0914 10:36:14.957866    5189 logs.go:123] Gathering logs for kube-controller-manager [ccbe87febee7] ...
	I0914 10:36:14.957877    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccbe87febee7"
	I0914 10:36:14.971825    5189 logs.go:123] Gathering logs for kubelet ...
	I0914 10:36:14.971838    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:36:15.008564    5189 logs.go:123] Gathering logs for dmesg ...
	I0914 10:36:15.008572    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:36:15.012392    5189 logs.go:123] Gathering logs for etcd [bc0eb1fe6478] ...
	I0914 10:36:15.012399    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0eb1fe6478"
	I0914 10:36:15.026675    5189 logs.go:123] Gathering logs for coredns [d58d98b98ad4] ...
	I0914 10:36:15.026686    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d58d98b98ad4"
	I0914 10:36:15.038605    5189 logs.go:123] Gathering logs for storage-provisioner [7588b357ac42] ...
	I0914 10:36:15.038615    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7588b357ac42"
	I0914 10:36:15.049408    5189 logs.go:123] Gathering logs for kube-apiserver [11f9ffdf6e43] ...
	I0914 10:36:15.049418    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11f9ffdf6e43"
	I0914 10:36:15.063427    5189 logs.go:123] Gathering logs for kube-scheduler [f2165e8cce8d] ...
	I0914 10:36:15.063437    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2165e8cce8d"
	I0914 10:36:15.078950    5189 logs.go:123] Gathering logs for Docker ...
	I0914 10:36:15.078959    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:36:15.102805    5189 logs.go:123] Gathering logs for container status ...
	I0914 10:36:15.102813    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:36:17.616176    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:36:22.617383    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:36:22.617765    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:36:22.645275    5189 logs.go:276] 2 containers: [11f9ffdf6e43 ea8a24c9014a]
	I0914 10:36:22.645428    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:36:22.663669    5189 logs.go:276] 2 containers: [46f64762f77a bc0eb1fe6478]
	I0914 10:36:22.663768    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:36:22.677130    5189 logs.go:276] 1 containers: [d58d98b98ad4]
	I0914 10:36:22.677217    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:36:22.688603    5189 logs.go:276] 2 containers: [1b357929f298 f2165e8cce8d]
	I0914 10:36:22.688684    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:36:22.699678    5189 logs.go:276] 1 containers: [213b21806615]
	I0914 10:36:22.699768    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:36:22.710357    5189 logs.go:276] 2 containers: [72fbfd868f6c ccbe87febee7]
	I0914 10:36:22.710438    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:36:22.720256    5189 logs.go:276] 0 containers: []
	W0914 10:36:22.720267    5189 logs.go:278] No container was found matching "kindnet"
	I0914 10:36:22.720335    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:36:22.731427    5189 logs.go:276] 2 containers: [db8219ef9871 7588b357ac42]
	I0914 10:36:22.731446    5189 logs.go:123] Gathering logs for kube-apiserver [ea8a24c9014a] ...
	I0914 10:36:22.731452    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea8a24c9014a"
	I0914 10:36:22.771618    5189 logs.go:123] Gathering logs for etcd [bc0eb1fe6478] ...
	I0914 10:36:22.771627    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0eb1fe6478"
	I0914 10:36:22.787180    5189 logs.go:123] Gathering logs for kube-scheduler [1b357929f298] ...
	I0914 10:36:22.787191    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b357929f298"
	I0914 10:36:22.808259    5189 logs.go:123] Gathering logs for kube-proxy [213b21806615] ...
	I0914 10:36:22.808271    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213b21806615"
	I0914 10:36:22.821575    5189 logs.go:123] Gathering logs for dmesg ...
	I0914 10:36:22.821587    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:36:22.825981    5189 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:36:22.825993    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:36:22.860820    5189 logs.go:123] Gathering logs for coredns [d58d98b98ad4] ...
	I0914 10:36:22.860835    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d58d98b98ad4"
	I0914 10:36:22.872675    5189 logs.go:123] Gathering logs for kube-scheduler [f2165e8cce8d] ...
	I0914 10:36:22.872686    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2165e8cce8d"
	I0914 10:36:22.887646    5189 logs.go:123] Gathering logs for storage-provisioner [db8219ef9871] ...
	I0914 10:36:22.887656    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8219ef9871"
	I0914 10:36:22.899477    5189 logs.go:123] Gathering logs for Docker ...
	I0914 10:36:22.899490    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:36:22.922174    5189 logs.go:123] Gathering logs for container status ...
	I0914 10:36:22.922181    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:36:22.933907    5189 logs.go:123] Gathering logs for kubelet ...
	I0914 10:36:22.933918    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:36:22.972080    5189 logs.go:123] Gathering logs for kube-controller-manager [72fbfd868f6c] ...
	I0914 10:36:22.972088    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72fbfd868f6c"
	I0914 10:36:22.989846    5189 logs.go:123] Gathering logs for storage-provisioner [7588b357ac42] ...
	I0914 10:36:22.989858    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7588b357ac42"
	I0914 10:36:23.001725    5189 logs.go:123] Gathering logs for kube-apiserver [11f9ffdf6e43] ...
	I0914 10:36:23.001735    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11f9ffdf6e43"
	I0914 10:36:23.015748    5189 logs.go:123] Gathering logs for etcd [46f64762f77a] ...
	I0914 10:36:23.015761    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46f64762f77a"
	I0914 10:36:23.034117    5189 logs.go:123] Gathering logs for kube-controller-manager [ccbe87febee7] ...
	I0914 10:36:23.034126    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccbe87febee7"
	I0914 10:36:25.547506    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:36:30.549515    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:36:30.549754    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:36:30.568909    5189 logs.go:276] 2 containers: [11f9ffdf6e43 ea8a24c9014a]
	I0914 10:36:30.569025    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:36:30.584547    5189 logs.go:276] 2 containers: [46f64762f77a bc0eb1fe6478]
	I0914 10:36:30.584678    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:36:30.596862    5189 logs.go:276] 1 containers: [d58d98b98ad4]
	I0914 10:36:30.596949    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:36:30.607994    5189 logs.go:276] 2 containers: [1b357929f298 f2165e8cce8d]
	I0914 10:36:30.608077    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:36:30.618711    5189 logs.go:276] 1 containers: [213b21806615]
	I0914 10:36:30.618794    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:36:30.629540    5189 logs.go:276] 2 containers: [72fbfd868f6c ccbe87febee7]
	I0914 10:36:30.629620    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:36:30.640146    5189 logs.go:276] 0 containers: []
	W0914 10:36:30.640161    5189 logs.go:278] No container was found matching "kindnet"
	I0914 10:36:30.640233    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:36:30.651243    5189 logs.go:276] 2 containers: [db8219ef9871 7588b357ac42]
	I0914 10:36:30.651262    5189 logs.go:123] Gathering logs for dmesg ...
	I0914 10:36:30.651267    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:36:30.655961    5189 logs.go:123] Gathering logs for kube-scheduler [f2165e8cce8d] ...
	I0914 10:36:30.655970    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2165e8cce8d"
	I0914 10:36:30.670450    5189 logs.go:123] Gathering logs for coredns [d58d98b98ad4] ...
	I0914 10:36:30.670460    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d58d98b98ad4"
	I0914 10:36:30.685861    5189 logs.go:123] Gathering logs for kube-scheduler [1b357929f298] ...
	I0914 10:36:30.685873    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b357929f298"
	I0914 10:36:30.697879    5189 logs.go:123] Gathering logs for container status ...
	I0914 10:36:30.697890    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:36:30.709723    5189 logs.go:123] Gathering logs for kubelet ...
	I0914 10:36:30.709732    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:36:30.748260    5189 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:36:30.748275    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:36:30.785427    5189 logs.go:123] Gathering logs for Docker ...
	I0914 10:36:30.785439    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:36:30.809345    5189 logs.go:123] Gathering logs for etcd [46f64762f77a] ...
	I0914 10:36:30.809355    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46f64762f77a"
	I0914 10:36:30.825075    5189 logs.go:123] Gathering logs for kube-controller-manager [ccbe87febee7] ...
	I0914 10:36:30.825087    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccbe87febee7"
	I0914 10:36:30.838735    5189 logs.go:123] Gathering logs for etcd [bc0eb1fe6478] ...
	I0914 10:36:30.838751    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0eb1fe6478"
	I0914 10:36:30.853032    5189 logs.go:123] Gathering logs for kube-proxy [213b21806615] ...
	I0914 10:36:30.853045    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213b21806615"
	I0914 10:36:30.864583    5189 logs.go:123] Gathering logs for kube-controller-manager [72fbfd868f6c] ...
	I0914 10:36:30.864597    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72fbfd868f6c"
	I0914 10:36:30.881421    5189 logs.go:123] Gathering logs for storage-provisioner [db8219ef9871] ...
	I0914 10:36:30.881432    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8219ef9871"
	I0914 10:36:30.893137    5189 logs.go:123] Gathering logs for storage-provisioner [7588b357ac42] ...
	I0914 10:36:30.893148    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7588b357ac42"
	I0914 10:36:30.904818    5189 logs.go:123] Gathering logs for kube-apiserver [11f9ffdf6e43] ...
	I0914 10:36:30.904832    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11f9ffdf6e43"
	I0914 10:36:30.920753    5189 logs.go:123] Gathering logs for kube-apiserver [ea8a24c9014a] ...
	I0914 10:36:30.920762    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea8a24c9014a"
	I0914 10:36:33.461370    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:36:38.463388    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:36:38.463752    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:36:38.496583    5189 logs.go:276] 2 containers: [11f9ffdf6e43 ea8a24c9014a]
	I0914 10:36:38.496713    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:36:38.512714    5189 logs.go:276] 2 containers: [46f64762f77a bc0eb1fe6478]
	I0914 10:36:38.512811    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:36:38.525531    5189 logs.go:276] 1 containers: [d58d98b98ad4]
	I0914 10:36:38.525616    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:36:38.536554    5189 logs.go:276] 2 containers: [1b357929f298 f2165e8cce8d]
	I0914 10:36:38.536629    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:36:38.546810    5189 logs.go:276] 1 containers: [213b21806615]
	I0914 10:36:38.546896    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:36:38.558208    5189 logs.go:276] 2 containers: [72fbfd868f6c ccbe87febee7]
	I0914 10:36:38.558293    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:36:38.568428    5189 logs.go:276] 0 containers: []
	W0914 10:36:38.568441    5189 logs.go:278] No container was found matching "kindnet"
	I0914 10:36:38.568512    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:36:38.579201    5189 logs.go:276] 2 containers: [db8219ef9871 7588b357ac42]
	I0914 10:36:38.579218    5189 logs.go:123] Gathering logs for dmesg ...
	I0914 10:36:38.579224    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:36:38.584095    5189 logs.go:123] Gathering logs for etcd [46f64762f77a] ...
	I0914 10:36:38.584103    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46f64762f77a"
	I0914 10:36:38.604249    5189 logs.go:123] Gathering logs for kube-scheduler [1b357929f298] ...
	I0914 10:36:38.604259    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b357929f298"
	I0914 10:36:38.615862    5189 logs.go:123] Gathering logs for kube-proxy [213b21806615] ...
	I0914 10:36:38.615873    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213b21806615"
	I0914 10:36:38.627914    5189 logs.go:123] Gathering logs for kube-scheduler [f2165e8cce8d] ...
	I0914 10:36:38.627925    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2165e8cce8d"
	I0914 10:36:38.641922    5189 logs.go:123] Gathering logs for kube-apiserver [11f9ffdf6e43] ...
	I0914 10:36:38.641932    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11f9ffdf6e43"
	I0914 10:36:38.657641    5189 logs.go:123] Gathering logs for kube-apiserver [ea8a24c9014a] ...
	I0914 10:36:38.657651    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea8a24c9014a"
	I0914 10:36:38.696133    5189 logs.go:123] Gathering logs for etcd [bc0eb1fe6478] ...
	I0914 10:36:38.696144    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0eb1fe6478"
	I0914 10:36:38.710851    5189 logs.go:123] Gathering logs for coredns [d58d98b98ad4] ...
	I0914 10:36:38.710861    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d58d98b98ad4"
	I0914 10:36:38.722112    5189 logs.go:123] Gathering logs for kube-controller-manager [ccbe87febee7] ...
	I0914 10:36:38.722123    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccbe87febee7"
	I0914 10:36:38.735768    5189 logs.go:123] Gathering logs for Docker ...
	I0914 10:36:38.735781    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:36:38.759835    5189 logs.go:123] Gathering logs for container status ...
	I0914 10:36:38.759846    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:36:38.771980    5189 logs.go:123] Gathering logs for storage-provisioner [7588b357ac42] ...
	I0914 10:36:38.771993    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7588b357ac42"
	I0914 10:36:38.784263    5189 logs.go:123] Gathering logs for kubelet ...
	I0914 10:36:38.784274    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:36:38.821958    5189 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:36:38.821971    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:36:38.855550    5189 logs.go:123] Gathering logs for kube-controller-manager [72fbfd868f6c] ...
	I0914 10:36:38.855562    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72fbfd868f6c"
	I0914 10:36:38.872746    5189 logs.go:123] Gathering logs for storage-provisioner [db8219ef9871] ...
	I0914 10:36:38.872755    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8219ef9871"
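Every "Gathering logs for X ..." pair above maps one source to one shell command: docker logs --tail 400 <id> for containers, journalctl -u <unit> -n 400 for the kubelet and Docker services, a filtered dmesg for the kernel ring buffer, and kubectl describe nodes for the cluster view. A simplified Go fan-out over those commands (illustrative only; the real runner executes them remotely and interleaves their output into this report):

package main

import (
	"fmt"
	"os/exec"
)

// gather runs one collection command through bash, mirroring the
// `/bin/bash -c "..."` invocations in the log, and prints whatever it got
// even when the command fails (failed components still have useful logs).
func gather(name, command string) {
	fmt.Printf("==> %s <==\n", name)
	out, err := exec.Command("/bin/bash", "-c", command).CombinedOutput()
	if err != nil {
		fmt.Printf("error gathering %s: %v\n", name, err)
	}
	fmt.Print(string(out))
}

func main() {
	gather("kubelet", "sudo journalctl -u kubelet -n 400")
	gather("Docker", "sudo journalctl -u docker -u cri-docker -n 400")
	gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
	gather("etcd [46f64762f77a]", "docker logs --tail 400 46f64762f77a")
}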
	I0914 10:36:41.385669    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:36:46.385741    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
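The probe that brackets each cycle is a plain HTTPS GET against https://10.0.2.15:8443/healthz with a short per-attempt client timeout; "Client.Timeout exceeded while awaiting headers" means the apiserver never answered within that window, so the runner gathers logs and tries again until its overall deadline. A hedged Go sketch of such a poll (function name, intervals, and TLS handling are illustrative, not minikube's actual implementation):

package main

import (
	"crypto/tls"
	"errors"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns 200 OK or the overall deadline
// passes. The per-attempt Timeout on the client is what surfaces as
// "Client.Timeout exceeded while awaiting headers" when the server hangs.
func waitForHealthz(url string, perAttempt, overall time.Duration) error {
	client := &http.Client{
		Timeout: perAttempt,
		Transport: &http.Transport{
			// The apiserver's serving cert is not trusted by the host here.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(overall)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		} else {
			fmt.Println("healthz not ready:", err)
		}
		time.Sleep(2 * time.Second)
	}
	return errors.New("apiserver never became healthy")
}

func main() {
	if err := waitForHealthz("https://10.0.2.15:8443/healthz", 5*time.Second, 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}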
	I0914 10:36:46.385953    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:36:46.403653    5189 logs.go:276] 2 containers: [11f9ffdf6e43 ea8a24c9014a]
	I0914 10:36:46.403760    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:36:46.420963    5189 logs.go:276] 2 containers: [46f64762f77a bc0eb1fe6478]
	I0914 10:36:46.421055    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:36:46.431555    5189 logs.go:276] 1 containers: [d58d98b98ad4]
	I0914 10:36:46.431639    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:36:46.441900    5189 logs.go:276] 2 containers: [1b357929f298 f2165e8cce8d]
	I0914 10:36:46.441983    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:36:46.452238    5189 logs.go:276] 1 containers: [213b21806615]
	I0914 10:36:46.452315    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:36:46.462770    5189 logs.go:276] 2 containers: [72fbfd868f6c ccbe87febee7]
	I0914 10:36:46.462869    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:36:46.472883    5189 logs.go:276] 0 containers: []
	W0914 10:36:46.472895    5189 logs.go:278] No container was found matching "kindnet"
	I0914 10:36:46.472965    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:36:46.483528    5189 logs.go:276] 2 containers: [db8219ef9871 7588b357ac42]
	I0914 10:36:46.483546    5189 logs.go:123] Gathering logs for kube-scheduler [f2165e8cce8d] ...
	I0914 10:36:46.483551    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2165e8cce8d"
	I0914 10:36:46.498073    5189 logs.go:123] Gathering logs for dmesg ...
	I0914 10:36:46.498083    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:36:46.502258    5189 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:36:46.502268    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:36:46.537497    5189 logs.go:123] Gathering logs for kube-controller-manager [72fbfd868f6c] ...
	I0914 10:36:46.537508    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72fbfd868f6c"
	I0914 10:36:46.561059    5189 logs.go:123] Gathering logs for storage-provisioner [7588b357ac42] ...
	I0914 10:36:46.561067    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7588b357ac42"
	I0914 10:36:46.572473    5189 logs.go:123] Gathering logs for etcd [46f64762f77a] ...
	I0914 10:36:46.572482    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46f64762f77a"
	I0914 10:36:46.588798    5189 logs.go:123] Gathering logs for kube-scheduler [1b357929f298] ...
	I0914 10:36:46.588808    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b357929f298"
	I0914 10:36:46.600584    5189 logs.go:123] Gathering logs for etcd [bc0eb1fe6478] ...
	I0914 10:36:46.600595    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0eb1fe6478"
	I0914 10:36:46.615377    5189 logs.go:123] Gathering logs for coredns [d58d98b98ad4] ...
	I0914 10:36:46.615387    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d58d98b98ad4"
	I0914 10:36:46.626604    5189 logs.go:123] Gathering logs for storage-provisioner [db8219ef9871] ...
	I0914 10:36:46.626614    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8219ef9871"
	I0914 10:36:46.637504    5189 logs.go:123] Gathering logs for Docker ...
	I0914 10:36:46.637514    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:36:46.662334    5189 logs.go:123] Gathering logs for container status ...
	I0914 10:36:46.662347    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:36:46.674579    5189 logs.go:123] Gathering logs for kubelet ...
	I0914 10:36:46.674591    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:36:46.714215    5189 logs.go:123] Gathering logs for kube-apiserver [ea8a24c9014a] ...
	I0914 10:36:46.714226    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea8a24c9014a"
	I0914 10:36:46.757224    5189 logs.go:123] Gathering logs for kube-controller-manager [ccbe87febee7] ...
	I0914 10:36:46.757240    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccbe87febee7"
	I0914 10:36:46.771107    5189 logs.go:123] Gathering logs for kube-apiserver [11f9ffdf6e43] ...
	I0914 10:36:46.771118    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11f9ffdf6e43"
	I0914 10:36:46.787026    5189 logs.go:123] Gathering logs for kube-proxy [213b21806615] ...
	I0914 10:36:46.787037    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213b21806615"
	I0914 10:36:49.300732    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:36:54.302837    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:36:54.303176    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:36:54.328047    5189 logs.go:276] 2 containers: [11f9ffdf6e43 ea8a24c9014a]
	I0914 10:36:54.328141    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:36:54.345338    5189 logs.go:276] 2 containers: [46f64762f77a bc0eb1fe6478]
	I0914 10:36:54.345422    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:36:54.358009    5189 logs.go:276] 1 containers: [d58d98b98ad4]
	I0914 10:36:54.358084    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:36:54.370403    5189 logs.go:276] 2 containers: [1b357929f298 f2165e8cce8d]
	I0914 10:36:54.370485    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:36:54.380859    5189 logs.go:276] 1 containers: [213b21806615]
	I0914 10:36:54.380939    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:36:54.392232    5189 logs.go:276] 2 containers: [72fbfd868f6c ccbe87febee7]
	I0914 10:36:54.392306    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:36:54.402621    5189 logs.go:276] 0 containers: []
	W0914 10:36:54.402633    5189 logs.go:278] No container was found matching "kindnet"
	I0914 10:36:54.402705    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:36:54.413193    5189 logs.go:276] 2 containers: [db8219ef9871 7588b357ac42]
	I0914 10:36:54.413210    5189 logs.go:123] Gathering logs for kube-apiserver [11f9ffdf6e43] ...
	I0914 10:36:54.413216    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11f9ffdf6e43"
	I0914 10:36:54.427062    5189 logs.go:123] Gathering logs for kube-proxy [213b21806615] ...
	I0914 10:36:54.427076    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213b21806615"
	I0914 10:36:54.439304    5189 logs.go:123] Gathering logs for kubelet ...
	I0914 10:36:54.439317    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:36:54.480085    5189 logs.go:123] Gathering logs for dmesg ...
	I0914 10:36:54.480098    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:36:54.484617    5189 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:36:54.484625    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:36:54.520002    5189 logs.go:123] Gathering logs for etcd [bc0eb1fe6478] ...
	I0914 10:36:54.520013    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0eb1fe6478"
	I0914 10:36:54.534602    5189 logs.go:123] Gathering logs for coredns [d58d98b98ad4] ...
	I0914 10:36:54.534617    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d58d98b98ad4"
	I0914 10:36:54.545868    5189 logs.go:123] Gathering logs for etcd [46f64762f77a] ...
	I0914 10:36:54.545880    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46f64762f77a"
	I0914 10:36:54.568114    5189 logs.go:123] Gathering logs for kube-controller-manager [ccbe87febee7] ...
	I0914 10:36:54.568128    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccbe87febee7"
	I0914 10:36:54.582040    5189 logs.go:123] Gathering logs for Docker ...
	I0914 10:36:54.582055    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:36:54.604771    5189 logs.go:123] Gathering logs for container status ...
	I0914 10:36:54.604778    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:36:54.616214    5189 logs.go:123] Gathering logs for kube-apiserver [ea8a24c9014a] ...
	I0914 10:36:54.616228    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea8a24c9014a"
	I0914 10:36:54.654766    5189 logs.go:123] Gathering logs for kube-scheduler [1b357929f298] ...
	I0914 10:36:54.654776    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b357929f298"
	I0914 10:36:54.670566    5189 logs.go:123] Gathering logs for kube-scheduler [f2165e8cce8d] ...
	I0914 10:36:54.670575    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2165e8cce8d"
	I0914 10:36:54.685105    5189 logs.go:123] Gathering logs for kube-controller-manager [72fbfd868f6c] ...
	I0914 10:36:54.685116    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72fbfd868f6c"
	I0914 10:36:54.702947    5189 logs.go:123] Gathering logs for storage-provisioner [db8219ef9871] ...
	I0914 10:36:54.702960    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8219ef9871"
	I0914 10:36:54.718956    5189 logs.go:123] Gathering logs for storage-provisioner [7588b357ac42] ...
	I0914 10:36:54.718964    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7588b357ac42"
	I0914 10:36:57.230536    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:37:02.230637    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:37:02.230981    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:37:02.259281    5189 logs.go:276] 2 containers: [11f9ffdf6e43 ea8a24c9014a]
	I0914 10:37:02.259428    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:37:02.277421    5189 logs.go:276] 2 containers: [46f64762f77a bc0eb1fe6478]
	I0914 10:37:02.277524    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:37:02.290901    5189 logs.go:276] 1 containers: [d58d98b98ad4]
	I0914 10:37:02.290996    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:37:02.302773    5189 logs.go:276] 2 containers: [1b357929f298 f2165e8cce8d]
	I0914 10:37:02.302859    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:37:02.313042    5189 logs.go:276] 1 containers: [213b21806615]
	I0914 10:37:02.313129    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:37:02.323367    5189 logs.go:276] 2 containers: [72fbfd868f6c ccbe87febee7]
	I0914 10:37:02.323449    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:37:02.337234    5189 logs.go:276] 0 containers: []
	W0914 10:37:02.337249    5189 logs.go:278] No container was found matching "kindnet"
	I0914 10:37:02.337318    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:37:02.347751    5189 logs.go:276] 2 containers: [db8219ef9871 7588b357ac42]
	I0914 10:37:02.347769    5189 logs.go:123] Gathering logs for kube-scheduler [f2165e8cce8d] ...
	I0914 10:37:02.347774    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2165e8cce8d"
	I0914 10:37:02.362799    5189 logs.go:123] Gathering logs for storage-provisioner [7588b357ac42] ...
	I0914 10:37:02.362810    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7588b357ac42"
	I0914 10:37:02.374569    5189 logs.go:123] Gathering logs for Docker ...
	I0914 10:37:02.374582    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:37:02.396270    5189 logs.go:123] Gathering logs for dmesg ...
	I0914 10:37:02.396277    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:37:02.400279    5189 logs.go:123] Gathering logs for kube-apiserver [11f9ffdf6e43] ...
	I0914 10:37:02.400293    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11f9ffdf6e43"
	I0914 10:37:02.419328    5189 logs.go:123] Gathering logs for etcd [46f64762f77a] ...
	I0914 10:37:02.419339    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46f64762f77a"
	I0914 10:37:02.439027    5189 logs.go:123] Gathering logs for container status ...
	I0914 10:37:02.439036    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:37:02.451250    5189 logs.go:123] Gathering logs for kube-apiserver [ea8a24c9014a] ...
	I0914 10:37:02.451260    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea8a24c9014a"
	I0914 10:37:02.489194    5189 logs.go:123] Gathering logs for coredns [d58d98b98ad4] ...
	I0914 10:37:02.489206    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d58d98b98ad4"
	I0914 10:37:02.500806    5189 logs.go:123] Gathering logs for kube-scheduler [1b357929f298] ...
	I0914 10:37:02.500816    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b357929f298"
	I0914 10:37:02.513340    5189 logs.go:123] Gathering logs for storage-provisioner [db8219ef9871] ...
	I0914 10:37:02.513350    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8219ef9871"
	I0914 10:37:02.526028    5189 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:37:02.526039    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:37:02.564407    5189 logs.go:123] Gathering logs for etcd [bc0eb1fe6478] ...
	I0914 10:37:02.564418    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0eb1fe6478"
	I0914 10:37:02.581537    5189 logs.go:123] Gathering logs for kube-controller-manager [72fbfd868f6c] ...
	I0914 10:37:02.581547    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72fbfd868f6c"
	I0914 10:37:02.598566    5189 logs.go:123] Gathering logs for kubelet ...
	I0914 10:37:02.598576    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:37:02.635883    5189 logs.go:123] Gathering logs for kube-proxy [213b21806615] ...
	I0914 10:37:02.635891    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213b21806615"
	I0914 10:37:02.647905    5189 logs.go:123] Gathering logs for kube-controller-manager [ccbe87febee7] ...
	I0914 10:37:02.647914    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccbe87febee7"
	I0914 10:37:05.163512    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:37:10.165456    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:37:10.165775    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:37:10.194243    5189 logs.go:276] 2 containers: [11f9ffdf6e43 ea8a24c9014a]
	I0914 10:37:10.194392    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:37:10.212445    5189 logs.go:276] 2 containers: [46f64762f77a bc0eb1fe6478]
	I0914 10:37:10.212565    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:37:10.226089    5189 logs.go:276] 1 containers: [d58d98b98ad4]
	I0914 10:37:10.226185    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:37:10.237547    5189 logs.go:276] 2 containers: [1b357929f298 f2165e8cce8d]
	I0914 10:37:10.237639    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:37:10.247999    5189 logs.go:276] 1 containers: [213b21806615]
	I0914 10:37:10.248083    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:37:10.259325    5189 logs.go:276] 2 containers: [72fbfd868f6c ccbe87febee7]
	I0914 10:37:10.259409    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:37:10.269170    5189 logs.go:276] 0 containers: []
	W0914 10:37:10.269182    5189 logs.go:278] No container was found matching "kindnet"
	I0914 10:37:10.269246    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:37:10.279839    5189 logs.go:276] 2 containers: [db8219ef9871 7588b357ac42]
	I0914 10:37:10.279855    5189 logs.go:123] Gathering logs for coredns [d58d98b98ad4] ...
	I0914 10:37:10.279860    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d58d98b98ad4"
	I0914 10:37:10.291485    5189 logs.go:123] Gathering logs for Docker ...
	I0914 10:37:10.291497    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:37:10.315199    5189 logs.go:123] Gathering logs for kubelet ...
	I0914 10:37:10.315210    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:37:10.353491    5189 logs.go:123] Gathering logs for dmesg ...
	I0914 10:37:10.353501    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:37:10.357672    5189 logs.go:123] Gathering logs for kube-apiserver [ea8a24c9014a] ...
	I0914 10:37:10.357678    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea8a24c9014a"
	I0914 10:37:10.395593    5189 logs.go:123] Gathering logs for etcd [bc0eb1fe6478] ...
	I0914 10:37:10.395603    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0eb1fe6478"
	I0914 10:37:10.410044    5189 logs.go:123] Gathering logs for storage-provisioner [db8219ef9871] ...
	I0914 10:37:10.410056    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8219ef9871"
	I0914 10:37:10.421734    5189 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:37:10.421745    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:37:10.456356    5189 logs.go:123] Gathering logs for kube-scheduler [1b357929f298] ...
	I0914 10:37:10.456368    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b357929f298"
	I0914 10:37:10.468145    5189 logs.go:123] Gathering logs for kube-scheduler [f2165e8cce8d] ...
	I0914 10:37:10.468155    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2165e8cce8d"
	I0914 10:37:10.482204    5189 logs.go:123] Gathering logs for kube-proxy [213b21806615] ...
	I0914 10:37:10.482217    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213b21806615"
	I0914 10:37:10.493387    5189 logs.go:123] Gathering logs for kube-controller-manager [72fbfd868f6c] ...
	I0914 10:37:10.493397    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72fbfd868f6c"
	I0914 10:37:10.510820    5189 logs.go:123] Gathering logs for kube-controller-manager [ccbe87febee7] ...
	I0914 10:37:10.510832    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccbe87febee7"
	I0914 10:37:10.524731    5189 logs.go:123] Gathering logs for kube-apiserver [11f9ffdf6e43] ...
	I0914 10:37:10.524742    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11f9ffdf6e43"
	I0914 10:37:10.538826    5189 logs.go:123] Gathering logs for etcd [46f64762f77a] ...
	I0914 10:37:10.538835    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46f64762f77a"
	I0914 10:37:10.552882    5189 logs.go:123] Gathering logs for storage-provisioner [7588b357ac42] ...
	I0914 10:37:10.552892    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7588b357ac42"
	I0914 10:37:10.564058    5189 logs.go:123] Gathering logs for container status ...
	I0914 10:37:10.564069    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:37:13.084156    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:37:18.086308    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:37:18.086803    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:37:18.118918    5189 logs.go:276] 2 containers: [11f9ffdf6e43 ea8a24c9014a]
	I0914 10:37:18.119078    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:37:18.136655    5189 logs.go:276] 2 containers: [46f64762f77a bc0eb1fe6478]
	I0914 10:37:18.136755    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:37:18.149653    5189 logs.go:276] 1 containers: [d58d98b98ad4]
	I0914 10:37:18.149750    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:37:18.161903    5189 logs.go:276] 2 containers: [1b357929f298 f2165e8cce8d]
	I0914 10:37:18.161990    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:37:18.172902    5189 logs.go:276] 1 containers: [213b21806615]
	I0914 10:37:18.172991    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:37:18.183681    5189 logs.go:276] 2 containers: [72fbfd868f6c ccbe87febee7]
	I0914 10:37:18.183769    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:37:18.194058    5189 logs.go:276] 0 containers: []
	W0914 10:37:18.194073    5189 logs.go:278] No container was found matching "kindnet"
	I0914 10:37:18.194146    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:37:18.206191    5189 logs.go:276] 2 containers: [db8219ef9871 7588b357ac42]
	I0914 10:37:18.206212    5189 logs.go:123] Gathering logs for kube-scheduler [f2165e8cce8d] ...
	I0914 10:37:18.206218    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2165e8cce8d"
	I0914 10:37:18.220732    5189 logs.go:123] Gathering logs for kube-proxy [213b21806615] ...
	I0914 10:37:18.220742    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213b21806615"
	I0914 10:37:18.237076    5189 logs.go:123] Gathering logs for kube-controller-manager [72fbfd868f6c] ...
	I0914 10:37:18.237092    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72fbfd868f6c"
	I0914 10:37:18.260598    5189 logs.go:123] Gathering logs for Docker ...
	I0914 10:37:18.260608    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:37:18.282529    5189 logs.go:123] Gathering logs for etcd [46f64762f77a] ...
	I0914 10:37:18.282538    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46f64762f77a"
	I0914 10:37:18.296061    5189 logs.go:123] Gathering logs for etcd [bc0eb1fe6478] ...
	I0914 10:37:18.296070    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0eb1fe6478"
	I0914 10:37:18.313934    5189 logs.go:123] Gathering logs for coredns [d58d98b98ad4] ...
	I0914 10:37:18.313943    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d58d98b98ad4"
	I0914 10:37:18.325155    5189 logs.go:123] Gathering logs for kube-scheduler [1b357929f298] ...
	I0914 10:37:18.325167    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b357929f298"
	I0914 10:37:18.336620    5189 logs.go:123] Gathering logs for dmesg ...
	I0914 10:37:18.336629    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:37:18.340693    5189 logs.go:123] Gathering logs for kube-apiserver [11f9ffdf6e43] ...
	I0914 10:37:18.340702    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11f9ffdf6e43"
	I0914 10:37:18.355305    5189 logs.go:123] Gathering logs for storage-provisioner [7588b357ac42] ...
	I0914 10:37:18.355317    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7588b357ac42"
	I0914 10:37:18.366670    5189 logs.go:123] Gathering logs for container status ...
	I0914 10:37:18.366686    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:37:18.379142    5189 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:37:18.379155    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:37:18.413141    5189 logs.go:123] Gathering logs for kube-controller-manager [ccbe87febee7] ...
	I0914 10:37:18.413157    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccbe87febee7"
	I0914 10:37:18.427046    5189 logs.go:123] Gathering logs for storage-provisioner [db8219ef9871] ...
	I0914 10:37:18.427059    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8219ef9871"
	I0914 10:37:18.442198    5189 logs.go:123] Gathering logs for kubelet ...
	I0914 10:37:18.442213    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:37:18.478983    5189 logs.go:123] Gathering logs for kube-apiserver [ea8a24c9014a] ...
	I0914 10:37:18.478990    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea8a24c9014a"
	I0914 10:37:21.019585    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:37:26.021608    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:37:26.021821    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:37:26.034407    5189 logs.go:276] 2 containers: [11f9ffdf6e43 ea8a24c9014a]
	I0914 10:37:26.034504    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:37:26.050207    5189 logs.go:276] 2 containers: [46f64762f77a bc0eb1fe6478]
	I0914 10:37:26.050296    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:37:26.060704    5189 logs.go:276] 1 containers: [d58d98b98ad4]
	I0914 10:37:26.060792    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:37:26.070843    5189 logs.go:276] 2 containers: [1b357929f298 f2165e8cce8d]
	I0914 10:37:26.070926    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:37:26.082490    5189 logs.go:276] 1 containers: [213b21806615]
	I0914 10:37:26.082571    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:37:26.094046    5189 logs.go:276] 2 containers: [72fbfd868f6c ccbe87febee7]
	I0914 10:37:26.094127    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:37:26.103995    5189 logs.go:276] 0 containers: []
	W0914 10:37:26.104007    5189 logs.go:278] No container was found matching "kindnet"
	I0914 10:37:26.104081    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:37:26.114767    5189 logs.go:276] 2 containers: [db8219ef9871 7588b357ac42]
	I0914 10:37:26.114790    5189 logs.go:123] Gathering logs for dmesg ...
	I0914 10:37:26.114796    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:37:26.119719    5189 logs.go:123] Gathering logs for etcd [bc0eb1fe6478] ...
	I0914 10:37:26.119726    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0eb1fe6478"
	I0914 10:37:26.134425    5189 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:37:26.134437    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:37:26.170490    5189 logs.go:123] Gathering logs for etcd [46f64762f77a] ...
	I0914 10:37:26.170504    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46f64762f77a"
	I0914 10:37:26.185422    5189 logs.go:123] Gathering logs for coredns [d58d98b98ad4] ...
	I0914 10:37:26.185433    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d58d98b98ad4"
	I0914 10:37:26.197350    5189 logs.go:123] Gathering logs for kube-scheduler [f2165e8cce8d] ...
	I0914 10:37:26.197361    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2165e8cce8d"
	I0914 10:37:26.211420    5189 logs.go:123] Gathering logs for kube-proxy [213b21806615] ...
	I0914 10:37:26.211432    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213b21806615"
	I0914 10:37:26.223320    5189 logs.go:123] Gathering logs for Docker ...
	I0914 10:37:26.223332    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:37:26.246865    5189 logs.go:123] Gathering logs for container status ...
	I0914 10:37:26.246876    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:37:26.260631    5189 logs.go:123] Gathering logs for kubelet ...
	I0914 10:37:26.260640    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:37:26.299601    5189 logs.go:123] Gathering logs for kube-apiserver [11f9ffdf6e43] ...
	I0914 10:37:26.299610    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11f9ffdf6e43"
	I0914 10:37:26.313307    5189 logs.go:123] Gathering logs for kube-controller-manager [72fbfd868f6c] ...
	I0914 10:37:26.313319    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72fbfd868f6c"
	I0914 10:37:26.330834    5189 logs.go:123] Gathering logs for kube-controller-manager [ccbe87febee7] ...
	I0914 10:37:26.330844    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccbe87febee7"
	I0914 10:37:26.345174    5189 logs.go:123] Gathering logs for storage-provisioner [db8219ef9871] ...
	I0914 10:37:26.345187    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8219ef9871"
	I0914 10:37:26.361106    5189 logs.go:123] Gathering logs for storage-provisioner [7588b357ac42] ...
	I0914 10:37:26.361120    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7588b357ac42"
	I0914 10:37:26.372319    5189 logs.go:123] Gathering logs for kube-apiserver [ea8a24c9014a] ...
	I0914 10:37:26.372331    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea8a24c9014a"
	I0914 10:37:26.411054    5189 logs.go:123] Gathering logs for kube-scheduler [1b357929f298] ...
	I0914 10:37:26.411067    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b357929f298"
	I0914 10:37:28.927738    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:37:33.929813    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:37:33.929934    5189 kubeadm.go:597] duration metric: took 4m3.949047333s to restartPrimaryControlPlane
	W0914 10:37:33.930006    5189 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0914 10:37:33.930042    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0914 10:37:34.993706    5189 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.063695708s)
	I0914 10:37:34.993782    5189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 10:37:34.998882    5189 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 10:37:35.002129    5189 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 10:37:35.005026    5189 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 10:37:35.005032    5189 kubeadm.go:157] found existing configuration files:
	
	I0914 10:37:35.005059    5189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50518 /etc/kubernetes/admin.conf
	I0914 10:37:35.007658    5189 kubeadm.go:163] "https://control-plane.minikube.internal:50518" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50518 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 10:37:35.007681    5189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 10:37:35.010952    5189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50518 /etc/kubernetes/kubelet.conf
	I0914 10:37:35.014234    5189 kubeadm.go:163] "https://control-plane.minikube.internal:50518" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50518 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 10:37:35.014259    5189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 10:37:35.016974    5189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50518 /etc/kubernetes/controller-manager.conf
	I0914 10:37:35.019346    5189 kubeadm.go:163] "https://control-plane.minikube.internal:50518" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50518 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 10:37:35.019375    5189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 10:37:35.022726    5189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50518 /etc/kubernetes/scheduler.conf
	I0914 10:37:35.025815    5189 kubeadm.go:163] "https://control-plane.minikube.internal:50518" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50518 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 10:37:35.025844    5189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
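The status-2 failures above are expected after kubeadm reset: the component kubeconfigs are gone, so each grep for the control-plane endpoint fails and the paired rm -f is a no-op; had a stale file referenced a different endpoint, the same pattern would have removed it before kubeadm init regenerates the set. The check-then-remove loop, sketched in Go (simplified, local execution instead of SSH):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:50518"
	for _, conf := range []string{"admin.conf", "kubelet.conf",
		"controller-manager.conf", "scheduler.conf"} {
		path := "/etc/kubernetes/" + conf
		// grep exits 1 when the endpoint is absent and 2 when the file is
		// missing; in both cases the config is stale (or gone) and removed.
		if err := exec.Command("sudo", "grep", endpoint, path).Run(); err != nil {
			fmt.Printf("%s does not reference %s, removing\n", path, endpoint)
			_ = exec.Command("sudo", "rm", "-f", path).Run()
		}
	}
}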
	I0914 10:37:35.028343    5189 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 10:37:35.046524    5189 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0914 10:37:35.046556    5189 kubeadm.go:310] [preflight] Running pre-flight checks
	I0914 10:37:35.097232    5189 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 10:37:35.097295    5189 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 10:37:35.097347    5189 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0914 10:37:35.147467    5189 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 10:37:35.151591    5189 out.go:235]   - Generating certificates and keys ...
	I0914 10:37:35.151626    5189 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0914 10:37:35.151656    5189 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0914 10:37:35.151708    5189 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0914 10:37:35.151753    5189 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0914 10:37:35.151795    5189 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0914 10:37:35.151822    5189 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0914 10:37:35.151862    5189 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0914 10:37:35.151898    5189 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0914 10:37:35.151938    5189 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0914 10:37:35.151978    5189 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0914 10:37:35.151999    5189 kubeadm.go:310] [certs] Using the existing "sa" key
	I0914 10:37:35.152033    5189 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 10:37:35.335388    5189 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 10:37:35.387549    5189 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 10:37:35.479649    5189 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 10:37:35.740483    5189 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 10:37:35.769660    5189 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 10:37:35.770050    5189 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 10:37:35.770075    5189 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0914 10:37:35.853905    5189 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 10:37:35.862138    5189 out.go:235]   - Booting up control plane ...
	I0914 10:37:35.862194    5189 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 10:37:35.862232    5189 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 10:37:35.862269    5189 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 10:37:35.862316    5189 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 10:37:35.862402    5189 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0914 10:37:40.856619    5189 kubeadm.go:310] [apiclient] All control plane components are healthy after 5.001897 seconds
	I0914 10:37:40.856720    5189 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0914 10:37:40.860298    5189 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0914 10:37:41.373171    5189 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0914 10:37:41.373593    5189 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-130000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0914 10:37:41.878294    5189 kubeadm.go:310] [bootstrap-token] Using token: r1mbrg.cr7msc60nic2b0om
	I0914 10:37:41.882077    5189 out.go:235]   - Configuring RBAC rules ...
	I0914 10:37:41.882137    5189 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0914 10:37:41.883939    5189 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0914 10:37:41.889925    5189 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0914 10:37:41.890802    5189 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0914 10:37:41.891530    5189 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0914 10:37:41.892460    5189 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0914 10:37:41.895579    5189 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0914 10:37:42.052341    5189 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0914 10:37:42.286430    5189 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0914 10:37:42.287041    5189 kubeadm.go:310] 
	I0914 10:37:42.287079    5189 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0914 10:37:42.287099    5189 kubeadm.go:310] 
	I0914 10:37:42.287150    5189 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0914 10:37:42.287157    5189 kubeadm.go:310] 
	I0914 10:37:42.287176    5189 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0914 10:37:42.287210    5189 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0914 10:37:42.287240    5189 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0914 10:37:42.287242    5189 kubeadm.go:310] 
	I0914 10:37:42.287290    5189 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0914 10:37:42.287294    5189 kubeadm.go:310] 
	I0914 10:37:42.287318    5189 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0914 10:37:42.287322    5189 kubeadm.go:310] 
	I0914 10:37:42.287349    5189 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0914 10:37:42.287387    5189 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0914 10:37:42.287424    5189 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0914 10:37:42.287429    5189 kubeadm.go:310] 
	I0914 10:37:42.287478    5189 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0914 10:37:42.287518    5189 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0914 10:37:42.287523    5189 kubeadm.go:310] 
	I0914 10:37:42.287565    5189 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token r1mbrg.cr7msc60nic2b0om \
	I0914 10:37:42.287618    5189 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6f2bcbe86b7524eabb66e32d65311e5f1e28ed403ce521627df0d2c85d84c574 \
	I0914 10:37:42.287634    5189 kubeadm.go:310] 	--control-plane 
	I0914 10:37:42.287637    5189 kubeadm.go:310] 
	I0914 10:37:42.287675    5189 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0914 10:37:42.287678    5189 kubeadm.go:310] 
	I0914 10:37:42.287721    5189 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token r1mbrg.cr7msc60nic2b0om \
	I0914 10:37:42.287775    5189 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6f2bcbe86b7524eabb66e32d65311e5f1e28ed403ce521627df0d2c85d84c574 
	I0914 10:37:42.288014    5189 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 10:37:42.288093    5189 cni.go:84] Creating CNI manager for ""
	I0914 10:37:42.288102    5189 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 10:37:42.291776    5189 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 10:37:42.303125    5189 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 10:37:42.306083    5189 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
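The 496 bytes copied to /etc/cni/net.d/1-k8s.conflist are minikube's bridge CNI configuration. The literal payload varies by minikube version; the Go helper below writes a representative bridge conflist of the same shape (the subnet and plugin options are an illustrative approximation, not the exact file):

package main

import "os"

// A bridge CNI config in the style minikube generates: a bridge plugin with
// host-local IPAM. Approximate content, for illustration only.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "addIf": "true",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    }
  ]
}`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		panic(err)
	}
}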
	I0914 10:37:42.311107    5189 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 10:37:42.311157    5189 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 10:37:42.311171    5189 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-130000 minikube.k8s.io/updated_at=2024_09_14T10_37_42_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=fbeeb744274463b05401c917e5ab21bbaf5ef95a minikube.k8s.io/name=stopped-upgrade-130000 minikube.k8s.io/primary=true
	I0914 10:37:42.353545    5189 kubeadm.go:1113] duration metric: took 42.432292ms to wait for elevateKubeSystemPrivileges
	I0914 10:37:42.353563    5189 ops.go:34] apiserver oom_adj: -16
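The -16 reported for the apiserver's oom_adj comes from the cat /proc/$(pgrep kube-apiserver)/oom_adj run at 10:37:42.311107: a negative adjustment tells the kernel's OOM killer to strongly prefer other victims over the apiserver. A small Go equivalent of that read (pgrep flags simplified from the log's -xnf pattern; must run on the node itself):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Newest process named exactly kube-apiserver, as pgrep would find it.
	out, err := exec.Command("pgrep", "-xn", "kube-apiserver").Output()
	if err != nil {
		fmt.Println("no kube-apiserver process:", err)
		return
	}
	pid := strings.TrimSpace(string(out))
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("apiserver oom_adj: %s", adj)
}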
	I0914 10:37:42.353568    5189 kubeadm.go:394] duration metric: took 4m12.386462042s to StartCluster
	I0914 10:37:42.353578    5189 settings.go:142] acquiring lock: {Name:mk7db576f28fda26cf1d7d854618889d7d4f8a49 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 10:37:42.353666    5189 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19643-1079/kubeconfig
	I0914 10:37:42.354068    5189 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19643-1079/kubeconfig: {Name:mk2bfa274931cfcaab81c340801bce4006cf7459 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 10:37:42.354248    5189 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 10:37:42.354260    5189 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0914 10:37:42.354298    5189 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-130000"
	I0914 10:37:42.354306    5189 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-130000"
	W0914 10:37:42.354310    5189 addons.go:243] addon storage-provisioner should already be in state true
	I0914 10:37:42.354326    5189 host.go:66] Checking if "stopped-upgrade-130000" exists ...
	I0914 10:37:42.354330    5189 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-130000"
	I0914 10:37:42.354340    5189 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-130000"
	I0914 10:37:42.354343    5189 config.go:182] Loaded profile config "stopped-upgrade-130000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0914 10:37:42.355220    5189 kapi.go:59] client config for stopped-upgrade-130000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/stopped-upgrade-130000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/stopped-upgrade-130000/client.key", CAFile:"/Users/jenkins/minikube-integration/19643-1079/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105b69800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
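The rest.Config dump above shows what minikube's kapi helper builds for the profile: the in-guest endpoint plus the profile's client certificate, key, and CA on the test host. A self-contained client-go sketch that constructs an equivalent clientset (paths copied from the log line; requires the k8s.io/client-go module and is illustrative only):

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	profile := "/Users/jenkins/minikube-integration/19643-1079/.minikube"
	cfg := &rest.Config{
		Host: "https://10.0.2.15:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: profile + "/profiles/stopped-upgrade-130000/client.crt",
			KeyFile:  profile + "/profiles/stopped-upgrade-130000/client.key",
			CAFile:   profile + "/ca.crt",
		},
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Any request through clientset would hit the same endpoint whose healthz
	// checks are timing out in this log, e.g. clientset.CoreV1().Nodes().List.
	fmt.Println("clientset ready:", clientset != nil)
}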
	I0914 10:37:42.355348    5189 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-130000"
	W0914 10:37:42.355353    5189 addons.go:243] addon default-storageclass should already be in state true
	I0914 10:37:42.355359    5189 host.go:66] Checking if "stopped-upgrade-130000" exists ...
	I0914 10:37:42.357934    5189 out.go:177] * Verifying Kubernetes components...
	I0914 10:37:42.358300    5189 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 10:37:42.362178    5189 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 10:37:42.362184    5189 sshutil.go:53] new ssh client: &{IP:localhost Port:50483 SSHKeyPath:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/stopped-upgrade-130000/id_rsa Username:docker}
	I0914 10:37:42.364962    5189 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 10:37:42.368976    5189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 10:37:42.373028    5189 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 10:37:42.373036    5189 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 10:37:42.373043    5189 sshutil.go:53] new ssh client: &{IP:localhost Port:50483 SSHKeyPath:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/stopped-upgrade-130000/id_rsa Username:docker}
	I0914 10:37:42.454440    5189 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 10:37:42.460286    5189 api_server.go:52] waiting for apiserver process to appear ...
	I0914 10:37:42.460341    5189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 10:37:42.464059    5189 api_server.go:72] duration metric: took 109.804708ms to wait for apiserver process to appear ...
	I0914 10:37:42.464066    5189 api_server.go:88] waiting for apiserver healthz status ...
	I0914 10:37:42.464074    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:37:42.471163    5189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 10:37:42.539163    5189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 10:37:42.864986    5189 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0914 10:37:42.865000    5189 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0914 10:37:47.466003    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:37:47.466087    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:37:52.466334    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:37:52.466362    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:37:57.466489    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:37:57.466516    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:38:02.466772    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:38:02.466800    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:38:07.467269    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:38:07.467323    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:38:12.467995    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:38:12.468056    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0914 10:38:12.864902    5189 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0914 10:38:12.868706    5189 out.go:177] * Enabled addons: storage-provisioner
	I0914 10:38:12.878532    5189 addons.go:510] duration metric: took 30.525558166s for enable addons: enabled=[storage-provisioner]
	I0914 10:38:17.469072    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:38:17.469168    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:38:22.471178    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:38:22.471229    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:38:27.473185    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:38:27.473235    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:38:32.475322    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:38:32.475349    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:38:37.476195    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:38:37.476274    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:38:42.478492    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
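A full minute of probes has now failed, so minikube falls back to collecting diagnostics below. Each Checking/stopped pair above is about five seconds apart, matching a per-attempt client timeout, while an outer loop keeps retrying against a larger overall deadline. A minimal sketch of that wait loop (hypothetical, not minikube's api_server.go; it assumes the apiserver's self-signed certificate, hence the skipped verification):

    package main

    import (
        "crypto/tls"
        "errors"
        "net/http"
        "time"
    )

    // waitForHealthz polls url until it returns 200 OK or overall elapses.
    func waitForHealthz(url string, attempt, overall time.Duration) error {
        tr := &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}
        client := &http.Client{Transport: tr, Timeout: attempt} // ~5s per probe here
        deadline := time.Now().Add(overall)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                healthy := resp.StatusCode == http.StatusOK
                resp.Body.Close()
                if healthy {
                    return nil
                }
            }
            time.Sleep(2 * time.Second) // brief pause before the next probe
        }
        return errors.New("apiserver never reported healthy: " + url)
    }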
	I0914 10:38:42.478686    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:38:42.491332    5189 logs.go:276] 1 containers: [6578fb58a2c5]
	I0914 10:38:42.491418    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:38:42.502561    5189 logs.go:276] 1 containers: [33c10369677e]
	I0914 10:38:42.502646    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:38:42.512734    5189 logs.go:276] 2 containers: [578d76dcac2e 07415dd7a640]
	I0914 10:38:42.512821    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:38:42.523708    5189 logs.go:276] 1 containers: [2b94b322a5cd]
	I0914 10:38:42.523785    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:38:42.535326    5189 logs.go:276] 1 containers: [5857464a690a]
	I0914 10:38:42.535409    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:38:42.545623    5189 logs.go:276] 1 containers: [73421ca00ae8]
	I0914 10:38:42.545698    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:38:42.555305    5189 logs.go:276] 0 containers: []
	W0914 10:38:42.555316    5189 logs.go:278] No container was found matching "kindnet"
	I0914 10:38:42.555380    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:38:42.565769    5189 logs.go:276] 1 containers: [7d720099f565]
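With the apiserver unreachable, minikube inventories the control-plane containers directly through Docker: one filtered docker ps -a per component, matching the kubelet's k8s_<component>_... container-naming scheme. kindnet legitimately matches nothing on this driver, hence the warning above. A sketch of that discovery pass (hypothetical helper reusing the same runner):

    package main

    import "strings"

    // findContainers locates all containers for each control-plane
    // component by name filter, including exited ones (-a).
    func findContainers(run func(string) (string, error)) map[string][]string {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
            "storage-provisioner",
        }
        found := make(map[string][]string)
        for _, c := range components {
            out, err := run("docker ps -a --filter=name=k8s_" + c + " --format={{.ID}}")
            if err != nil {
                continue // treated as zero containers, as with kindnet above
            }
            found[c] = strings.Fields(out) // one truncated ID per line
        }
        return found
    }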
	I0914 10:38:42.565785    5189 logs.go:123] Gathering logs for storage-provisioner [7d720099f565] ...
	I0914 10:38:42.565790    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d720099f565"
	I0914 10:38:42.577300    5189 logs.go:123] Gathering logs for Docker ...
	I0914 10:38:42.577313    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:38:42.601957    5189 logs.go:123] Gathering logs for container status ...
	I0914 10:38:42.601968    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:38:42.613459    5189 logs.go:123] Gathering logs for dmesg ...
	I0914 10:38:42.613471    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:38:42.618087    5189 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:38:42.618094    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:38:42.652534    5189 logs.go:123] Gathering logs for coredns [578d76dcac2e] ...
	I0914 10:38:42.652547    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 578d76dcac2e"
	I0914 10:38:42.664000    5189 logs.go:123] Gathering logs for coredns [07415dd7a640] ...
	I0914 10:38:42.664011    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07415dd7a640"
	I0914 10:38:42.675536    5189 logs.go:123] Gathering logs for kube-scheduler [2b94b322a5cd] ...
	I0914 10:38:42.675549    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b94b322a5cd"
	I0914 10:38:42.690503    5189 logs.go:123] Gathering logs for kube-proxy [5857464a690a] ...
	I0914 10:38:42.690512    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5857464a690a"
	I0914 10:38:42.704808    5189 logs.go:123] Gathering logs for kube-controller-manager [73421ca00ae8] ...
	I0914 10:38:42.704819    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73421ca00ae8"
	I0914 10:38:42.722412    5189 logs.go:123] Gathering logs for kubelet ...
	I0914 10:38:42.722421    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:38:42.758040    5189 logs.go:123] Gathering logs for kube-apiserver [6578fb58a2c5] ...
	I0914 10:38:42.758052    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6578fb58a2c5"
	I0914 10:38:42.772305    5189 logs.go:123] Gathering logs for etcd [33c10369677e] ...
	I0914 10:38:42.772316    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33c10369677e"
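That completes one diagnostics pass; the probe/discover/gather cycle then repeats for the remainder of this test. Every source is capped (docker logs --tail 400, journalctl -n 400, dmesg piped through tail) so a wedged component cannot flood the report, and the container-status step prefers crictl when installed, falling back to docker ps. Continuing the sketch (commands copied verbatim from the log; the loop is hypothetical, and run is assumed to wrap each command in bash -c so the pipe and backtick substitution work):

    package main

    // gatherDiagnostics collects the same bounded log sources seen above.
    func gatherDiagnostics(run func(string) (string, error), ids map[string][]string) map[string]string {
        out := map[string]string{}
        for comp, list := range ids {
            for _, id := range list {
                out[comp+" "+id], _ = run("docker logs --tail 400 " + id)
            }
        }
        out["kubelet"], _ = run("sudo journalctl -u kubelet -n 400")
        out["docker"], _ = run("sudo journalctl -u docker -u cri-docker -n 400")
        out["dmesg"], _ = run("sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
        // Prefer crictl when installed; otherwise fall back to docker ps.
        out["container status"], _ = run("sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
        out["describe nodes"], _ = run("sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig")
        return out
    }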
	I0914 10:38:45.288543    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:38:50.290965    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:38:50.291246    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:38:50.317672    5189 logs.go:276] 1 containers: [6578fb58a2c5]
	I0914 10:38:50.317812    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:38:50.335539    5189 logs.go:276] 1 containers: [33c10369677e]
	I0914 10:38:50.335635    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:38:50.349155    5189 logs.go:276] 2 containers: [578d76dcac2e 07415dd7a640]
	I0914 10:38:50.349234    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:38:50.360173    5189 logs.go:276] 1 containers: [2b94b322a5cd]
	I0914 10:38:50.360258    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:38:50.370598    5189 logs.go:276] 1 containers: [5857464a690a]
	I0914 10:38:50.370687    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:38:50.387445    5189 logs.go:276] 1 containers: [73421ca00ae8]
	I0914 10:38:50.387529    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:38:50.397528    5189 logs.go:276] 0 containers: []
	W0914 10:38:50.397541    5189 logs.go:278] No container was found matching "kindnet"
	I0914 10:38:50.397613    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:38:50.408842    5189 logs.go:276] 1 containers: [7d720099f565]
	I0914 10:38:50.408856    5189 logs.go:123] Gathering logs for kube-proxy [5857464a690a] ...
	I0914 10:38:50.408862    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5857464a690a"
	I0914 10:38:50.420408    5189 logs.go:123] Gathering logs for kube-controller-manager [73421ca00ae8] ...
	I0914 10:38:50.420417    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73421ca00ae8"
	I0914 10:38:50.438049    5189 logs.go:123] Gathering logs for storage-provisioner [7d720099f565] ...
	I0914 10:38:50.438059    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d720099f565"
	I0914 10:38:50.449600    5189 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:38:50.449617    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:38:50.487621    5189 logs.go:123] Gathering logs for kube-apiserver [6578fb58a2c5] ...
	I0914 10:38:50.487634    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6578fb58a2c5"
	I0914 10:38:50.502253    5189 logs.go:123] Gathering logs for etcd [33c10369677e] ...
	I0914 10:38:50.502263    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33c10369677e"
	I0914 10:38:50.516141    5189 logs.go:123] Gathering logs for coredns [07415dd7a640] ...
	I0914 10:38:50.516153    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07415dd7a640"
	I0914 10:38:50.527642    5189 logs.go:123] Gathering logs for kube-scheduler [2b94b322a5cd] ...
	I0914 10:38:50.527652    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b94b322a5cd"
	I0914 10:38:50.544094    5189 logs.go:123] Gathering logs for kubelet ...
	I0914 10:38:50.544103    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:38:50.579129    5189 logs.go:123] Gathering logs for dmesg ...
	I0914 10:38:50.579140    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:38:50.583355    5189 logs.go:123] Gathering logs for coredns [578d76dcac2e] ...
	I0914 10:38:50.583364    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 578d76dcac2e"
	I0914 10:38:50.594721    5189 logs.go:123] Gathering logs for Docker ...
	I0914 10:38:50.594737    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:38:50.618006    5189 logs.go:123] Gathering logs for container status ...
	I0914 10:38:50.618016    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:38:53.131241    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:38:58.133852    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:38:58.134436    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:38:58.179519    5189 logs.go:276] 1 containers: [6578fb58a2c5]
	I0914 10:38:58.179661    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:38:58.199098    5189 logs.go:276] 1 containers: [33c10369677e]
	I0914 10:38:58.199206    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:38:58.213742    5189 logs.go:276] 2 containers: [578d76dcac2e 07415dd7a640]
	I0914 10:38:58.213832    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:38:58.225951    5189 logs.go:276] 1 containers: [2b94b322a5cd]
	I0914 10:38:58.226034    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:38:58.236423    5189 logs.go:276] 1 containers: [5857464a690a]
	I0914 10:38:58.236507    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:38:58.248007    5189 logs.go:276] 1 containers: [73421ca00ae8]
	I0914 10:38:58.248085    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:38:58.257932    5189 logs.go:276] 0 containers: []
	W0914 10:38:58.257946    5189 logs.go:278] No container was found matching "kindnet"
	I0914 10:38:58.258012    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:38:58.272582    5189 logs.go:276] 1 containers: [7d720099f565]
	I0914 10:38:58.272603    5189 logs.go:123] Gathering logs for kubelet ...
	I0914 10:38:58.272609    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:38:58.305419    5189 logs.go:123] Gathering logs for dmesg ...
	I0914 10:38:58.305427    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:38:58.309925    5189 logs.go:123] Gathering logs for Docker ...
	I0914 10:38:58.309934    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:38:58.334696    5189 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:38:58.334706    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:38:58.374295    5189 logs.go:123] Gathering logs for kube-apiserver [6578fb58a2c5] ...
	I0914 10:38:58.374312    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6578fb58a2c5"
	I0914 10:38:58.388389    5189 logs.go:123] Gathering logs for etcd [33c10369677e] ...
	I0914 10:38:58.388400    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33c10369677e"
	I0914 10:38:58.402705    5189 logs.go:123] Gathering logs for coredns [578d76dcac2e] ...
	I0914 10:38:58.402721    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 578d76dcac2e"
	I0914 10:38:58.414110    5189 logs.go:123] Gathering logs for coredns [07415dd7a640] ...
	I0914 10:38:58.414120    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07415dd7a640"
	I0914 10:38:58.425345    5189 logs.go:123] Gathering logs for kube-scheduler [2b94b322a5cd] ...
	I0914 10:38:58.425355    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b94b322a5cd"
	I0914 10:38:58.440235    5189 logs.go:123] Gathering logs for kube-proxy [5857464a690a] ...
	I0914 10:38:58.440244    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5857464a690a"
	I0914 10:38:58.451320    5189 logs.go:123] Gathering logs for kube-controller-manager [73421ca00ae8] ...
	I0914 10:38:58.451329    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73421ca00ae8"
	I0914 10:38:58.468212    5189 logs.go:123] Gathering logs for storage-provisioner [7d720099f565] ...
	I0914 10:38:58.468222    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d720099f565"
	I0914 10:38:58.479700    5189 logs.go:123] Gathering logs for container status ...
	I0914 10:38:58.479709    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:39:00.993200    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:39:05.995743    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:39:05.996299    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:39:06.035417    5189 logs.go:276] 1 containers: [6578fb58a2c5]
	I0914 10:39:06.035585    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:39:06.058003    5189 logs.go:276] 1 containers: [33c10369677e]
	I0914 10:39:06.058137    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:39:06.073147    5189 logs.go:276] 2 containers: [578d76dcac2e 07415dd7a640]
	I0914 10:39:06.073238    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:39:06.085204    5189 logs.go:276] 1 containers: [2b94b322a5cd]
	I0914 10:39:06.085279    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:39:06.095300    5189 logs.go:276] 1 containers: [5857464a690a]
	I0914 10:39:06.095386    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:39:06.105534    5189 logs.go:276] 1 containers: [73421ca00ae8]
	I0914 10:39:06.105602    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:39:06.115782    5189 logs.go:276] 0 containers: []
	W0914 10:39:06.115796    5189 logs.go:278] No container was found matching "kindnet"
	I0914 10:39:06.115871    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:39:06.126292    5189 logs.go:276] 1 containers: [7d720099f565]
	I0914 10:39:06.126306    5189 logs.go:123] Gathering logs for dmesg ...
	I0914 10:39:06.126311    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:39:06.130824    5189 logs.go:123] Gathering logs for kube-apiserver [6578fb58a2c5] ...
	I0914 10:39:06.130835    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6578fb58a2c5"
	I0914 10:39:06.146503    5189 logs.go:123] Gathering logs for coredns [578d76dcac2e] ...
	I0914 10:39:06.146516    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 578d76dcac2e"
	I0914 10:39:06.159824    5189 logs.go:123] Gathering logs for kube-proxy [5857464a690a] ...
	I0914 10:39:06.159834    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5857464a690a"
	I0914 10:39:06.171710    5189 logs.go:123] Gathering logs for kube-controller-manager [73421ca00ae8] ...
	I0914 10:39:06.171720    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73421ca00ae8"
	I0914 10:39:06.189144    5189 logs.go:123] Gathering logs for storage-provisioner [7d720099f565] ...
	I0914 10:39:06.189159    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d720099f565"
	I0914 10:39:06.201086    5189 logs.go:123] Gathering logs for Docker ...
	I0914 10:39:06.201099    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:39:06.224876    5189 logs.go:123] Gathering logs for container status ...
	I0914 10:39:06.224886    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:39:06.236081    5189 logs.go:123] Gathering logs for kubelet ...
	I0914 10:39:06.236094    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:39:06.270527    5189 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:39:06.270538    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:39:06.310190    5189 logs.go:123] Gathering logs for etcd [33c10369677e] ...
	I0914 10:39:06.310202    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33c10369677e"
	I0914 10:39:06.329627    5189 logs.go:123] Gathering logs for coredns [07415dd7a640] ...
	I0914 10:39:06.329637    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07415dd7a640"
	I0914 10:39:06.343768    5189 logs.go:123] Gathering logs for kube-scheduler [2b94b322a5cd] ...
	I0914 10:39:06.343780    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b94b322a5cd"
	I0914 10:39:08.865540    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:39:13.867717    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:39:13.868030    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:39:13.893324    5189 logs.go:276] 1 containers: [6578fb58a2c5]
	I0914 10:39:13.893458    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:39:13.910455    5189 logs.go:276] 1 containers: [33c10369677e]
	I0914 10:39:13.910570    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:39:13.923349    5189 logs.go:276] 2 containers: [578d76dcac2e 07415dd7a640]
	I0914 10:39:13.923435    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:39:13.935655    5189 logs.go:276] 1 containers: [2b94b322a5cd]
	I0914 10:39:13.935723    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:39:13.946042    5189 logs.go:276] 1 containers: [5857464a690a]
	I0914 10:39:13.946133    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:39:13.956482    5189 logs.go:276] 1 containers: [73421ca00ae8]
	I0914 10:39:13.956564    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:39:13.966676    5189 logs.go:276] 0 containers: []
	W0914 10:39:13.966688    5189 logs.go:278] No container was found matching "kindnet"
	I0914 10:39:13.966759    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:39:13.977267    5189 logs.go:276] 1 containers: [7d720099f565]
	I0914 10:39:13.977282    5189 logs.go:123] Gathering logs for kube-apiserver [6578fb58a2c5] ...
	I0914 10:39:13.977287    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6578fb58a2c5"
	I0914 10:39:13.991523    5189 logs.go:123] Gathering logs for kube-scheduler [2b94b322a5cd] ...
	I0914 10:39:13.991531    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b94b322a5cd"
	I0914 10:39:14.006749    5189 logs.go:123] Gathering logs for kube-proxy [5857464a690a] ...
	I0914 10:39:14.006758    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5857464a690a"
	I0914 10:39:14.018241    5189 logs.go:123] Gathering logs for kube-controller-manager [73421ca00ae8] ...
	I0914 10:39:14.018253    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73421ca00ae8"
	I0914 10:39:14.035817    5189 logs.go:123] Gathering logs for kubelet ...
	I0914 10:39:14.035828    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:39:14.071205    5189 logs.go:123] Gathering logs for dmesg ...
	I0914 10:39:14.071214    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:39:14.075275    5189 logs.go:123] Gathering logs for coredns [578d76dcac2e] ...
	I0914 10:39:14.075280    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 578d76dcac2e"
	I0914 10:39:14.086835    5189 logs.go:123] Gathering logs for coredns [07415dd7a640] ...
	I0914 10:39:14.086846    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07415dd7a640"
	I0914 10:39:14.102681    5189 logs.go:123] Gathering logs for storage-provisioner [7d720099f565] ...
	I0914 10:39:14.102692    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d720099f565"
	I0914 10:39:14.115610    5189 logs.go:123] Gathering logs for Docker ...
	I0914 10:39:14.115621    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:39:14.140337    5189 logs.go:123] Gathering logs for container status ...
	I0914 10:39:14.140352    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:39:14.152023    5189 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:39:14.152037    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:39:14.191135    5189 logs.go:123] Gathering logs for etcd [33c10369677e] ...
	I0914 10:39:14.191146    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33c10369677e"
	I0914 10:39:16.707884    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:39:21.709639    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:39:21.709942    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:39:21.737056    5189 logs.go:276] 1 containers: [6578fb58a2c5]
	I0914 10:39:21.737210    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:39:21.753971    5189 logs.go:276] 1 containers: [33c10369677e]
	I0914 10:39:21.754061    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:39:21.767469    5189 logs.go:276] 2 containers: [578d76dcac2e 07415dd7a640]
	I0914 10:39:21.767546    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:39:21.778538    5189 logs.go:276] 1 containers: [2b94b322a5cd]
	I0914 10:39:21.778627    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:39:21.789346    5189 logs.go:276] 1 containers: [5857464a690a]
	I0914 10:39:21.789418    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:39:21.800817    5189 logs.go:276] 1 containers: [73421ca00ae8]
	I0914 10:39:21.800899    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:39:21.811001    5189 logs.go:276] 0 containers: []
	W0914 10:39:21.811014    5189 logs.go:278] No container was found matching "kindnet"
	I0914 10:39:21.811090    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:39:21.821449    5189 logs.go:276] 1 containers: [7d720099f565]
	I0914 10:39:21.821462    5189 logs.go:123] Gathering logs for kube-proxy [5857464a690a] ...
	I0914 10:39:21.821468    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5857464a690a"
	I0914 10:39:21.833762    5189 logs.go:123] Gathering logs for storage-provisioner [7d720099f565] ...
	I0914 10:39:21.833770    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d720099f565"
	I0914 10:39:21.846314    5189 logs.go:123] Gathering logs for container status ...
	I0914 10:39:21.846323    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:39:21.857782    5189 logs.go:123] Gathering logs for kube-apiserver [6578fb58a2c5] ...
	I0914 10:39:21.857796    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6578fb58a2c5"
	I0914 10:39:21.872821    5189 logs.go:123] Gathering logs for etcd [33c10369677e] ...
	I0914 10:39:21.872832    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33c10369677e"
	I0914 10:39:21.886326    5189 logs.go:123] Gathering logs for coredns [578d76dcac2e] ...
	I0914 10:39:21.886338    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 578d76dcac2e"
	I0914 10:39:21.898334    5189 logs.go:123] Gathering logs for coredns [07415dd7a640] ...
	I0914 10:39:21.898347    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07415dd7a640"
	I0914 10:39:21.910622    5189 logs.go:123] Gathering logs for kube-scheduler [2b94b322a5cd] ...
	I0914 10:39:21.910636    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b94b322a5cd"
	I0914 10:39:21.925528    5189 logs.go:123] Gathering logs for kubelet ...
	I0914 10:39:21.925540    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:39:21.960324    5189 logs.go:123] Gathering logs for dmesg ...
	I0914 10:39:21.960334    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:39:21.965106    5189 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:39:21.965113    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:39:22.003145    5189 logs.go:123] Gathering logs for kube-controller-manager [73421ca00ae8] ...
	I0914 10:39:22.003158    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73421ca00ae8"
	I0914 10:39:22.022024    5189 logs.go:123] Gathering logs for Docker ...
	I0914 10:39:22.022035    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:39:24.547086    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:39:29.549568    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:39:29.550050    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:39:29.585232    5189 logs.go:276] 1 containers: [6578fb58a2c5]
	I0914 10:39:29.585390    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:39:29.605423    5189 logs.go:276] 1 containers: [33c10369677e]
	I0914 10:39:29.605547    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:39:29.619818    5189 logs.go:276] 2 containers: [578d76dcac2e 07415dd7a640]
	I0914 10:39:29.619905    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:39:29.635380    5189 logs.go:276] 1 containers: [2b94b322a5cd]
	I0914 10:39:29.635465    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:39:29.646555    5189 logs.go:276] 1 containers: [5857464a690a]
	I0914 10:39:29.646629    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:39:29.657537    5189 logs.go:276] 1 containers: [73421ca00ae8]
	I0914 10:39:29.657607    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:39:29.667723    5189 logs.go:276] 0 containers: []
	W0914 10:39:29.667734    5189 logs.go:278] No container was found matching "kindnet"
	I0914 10:39:29.667795    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:39:29.678233    5189 logs.go:276] 1 containers: [7d720099f565]
	I0914 10:39:29.678247    5189 logs.go:123] Gathering logs for kube-scheduler [2b94b322a5cd] ...
	I0914 10:39:29.678252    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b94b322a5cd"
	I0914 10:39:29.693708    5189 logs.go:123] Gathering logs for kube-controller-manager [73421ca00ae8] ...
	I0914 10:39:29.693722    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73421ca00ae8"
	I0914 10:39:29.713172    5189 logs.go:123] Gathering logs for Docker ...
	I0914 10:39:29.713183    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:39:29.738264    5189 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:39:29.738273    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:39:29.778818    5189 logs.go:123] Gathering logs for etcd [33c10369677e] ...
	I0914 10:39:29.778831    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33c10369677e"
	I0914 10:39:29.797327    5189 logs.go:123] Gathering logs for coredns [578d76dcac2e] ...
	I0914 10:39:29.797338    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 578d76dcac2e"
	I0914 10:39:29.810538    5189 logs.go:123] Gathering logs for coredns [07415dd7a640] ...
	I0914 10:39:29.810551    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07415dd7a640"
	I0914 10:39:29.825962    5189 logs.go:123] Gathering logs for storage-provisioner [7d720099f565] ...
	I0914 10:39:29.825975    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d720099f565"
	I0914 10:39:29.841516    5189 logs.go:123] Gathering logs for container status ...
	I0914 10:39:29.841526    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:39:29.854305    5189 logs.go:123] Gathering logs for kubelet ...
	I0914 10:39:29.854315    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:39:29.886967    5189 logs.go:123] Gathering logs for dmesg ...
	I0914 10:39:29.886976    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:39:29.890685    5189 logs.go:123] Gathering logs for kube-apiserver [6578fb58a2c5] ...
	I0914 10:39:29.890692    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6578fb58a2c5"
	I0914 10:39:29.905045    5189 logs.go:123] Gathering logs for kube-proxy [5857464a690a] ...
	I0914 10:39:29.905058    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5857464a690a"
	I0914 10:39:32.422404    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:39:37.424989    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:39:37.425598    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:39:37.464531    5189 logs.go:276] 1 containers: [6578fb58a2c5]
	I0914 10:39:37.464698    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:39:37.486175    5189 logs.go:276] 1 containers: [33c10369677e]
	I0914 10:39:37.486298    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:39:37.503591    5189 logs.go:276] 2 containers: [578d76dcac2e 07415dd7a640]
	I0914 10:39:37.503685    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:39:37.515364    5189 logs.go:276] 1 containers: [2b94b322a5cd]
	I0914 10:39:37.515440    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:39:37.529988    5189 logs.go:276] 1 containers: [5857464a690a]
	I0914 10:39:37.530067    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:39:37.540528    5189 logs.go:276] 1 containers: [73421ca00ae8]
	I0914 10:39:37.540605    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:39:37.549973    5189 logs.go:276] 0 containers: []
	W0914 10:39:37.549983    5189 logs.go:278] No container was found matching "kindnet"
	I0914 10:39:37.550041    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:39:37.560738    5189 logs.go:276] 1 containers: [7d720099f565]
	I0914 10:39:37.560755    5189 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:39:37.560762    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:39:37.597277    5189 logs.go:123] Gathering logs for kube-scheduler [2b94b322a5cd] ...
	I0914 10:39:37.597288    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b94b322a5cd"
	I0914 10:39:37.613689    5189 logs.go:123] Gathering logs for Docker ...
	I0914 10:39:37.613700    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:39:37.636830    5189 logs.go:123] Gathering logs for container status ...
	I0914 10:39:37.636838    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:39:37.647627    5189 logs.go:123] Gathering logs for kube-controller-manager [73421ca00ae8] ...
	I0914 10:39:37.647636    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73421ca00ae8"
	I0914 10:39:37.665855    5189 logs.go:123] Gathering logs for kubelet ...
	I0914 10:39:37.665866    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:39:37.699679    5189 logs.go:123] Gathering logs for dmesg ...
	I0914 10:39:37.699688    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:39:37.704164    5189 logs.go:123] Gathering logs for kube-apiserver [6578fb58a2c5] ...
	I0914 10:39:37.704172    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6578fb58a2c5"
	I0914 10:39:37.718186    5189 logs.go:123] Gathering logs for etcd [33c10369677e] ...
	I0914 10:39:37.718196    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33c10369677e"
	I0914 10:39:37.733429    5189 logs.go:123] Gathering logs for coredns [578d76dcac2e] ...
	I0914 10:39:37.733444    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 578d76dcac2e"
	I0914 10:39:37.745436    5189 logs.go:123] Gathering logs for coredns [07415dd7a640] ...
	I0914 10:39:37.745445    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07415dd7a640"
	I0914 10:39:37.756421    5189 logs.go:123] Gathering logs for kube-proxy [5857464a690a] ...
	I0914 10:39:37.756431    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5857464a690a"
	I0914 10:39:37.768135    5189 logs.go:123] Gathering logs for storage-provisioner [7d720099f565] ...
	I0914 10:39:37.768148    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d720099f565"
	I0914 10:39:40.281722    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:39:45.283730    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:39:45.284013    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:39:45.305258    5189 logs.go:276] 1 containers: [6578fb58a2c5]
	I0914 10:39:45.305384    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:39:45.320562    5189 logs.go:276] 1 containers: [33c10369677e]
	I0914 10:39:45.320651    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:39:45.333130    5189 logs.go:276] 2 containers: [578d76dcac2e 07415dd7a640]
	I0914 10:39:45.333211    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:39:45.346874    5189 logs.go:276] 1 containers: [2b94b322a5cd]
	I0914 10:39:45.346952    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:39:45.356577    5189 logs.go:276] 1 containers: [5857464a690a]
	I0914 10:39:45.356661    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:39:45.367693    5189 logs.go:276] 1 containers: [73421ca00ae8]
	I0914 10:39:45.367772    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:39:45.378396    5189 logs.go:276] 0 containers: []
	W0914 10:39:45.378407    5189 logs.go:278] No container was found matching "kindnet"
	I0914 10:39:45.378478    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:39:45.388723    5189 logs.go:276] 1 containers: [7d720099f565]
	I0914 10:39:45.388738    5189 logs.go:123] Gathering logs for kubelet ...
	I0914 10:39:45.388743    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:39:45.422327    5189 logs.go:123] Gathering logs for etcd [33c10369677e] ...
	I0914 10:39:45.422335    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33c10369677e"
	I0914 10:39:45.436345    5189 logs.go:123] Gathering logs for coredns [07415dd7a640] ...
	I0914 10:39:45.436357    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07415dd7a640"
	I0914 10:39:45.447606    5189 logs.go:123] Gathering logs for storage-provisioner [7d720099f565] ...
	I0914 10:39:45.447616    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d720099f565"
	I0914 10:39:45.459203    5189 logs.go:123] Gathering logs for Docker ...
	I0914 10:39:45.459216    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:39:45.482723    5189 logs.go:123] Gathering logs for container status ...
	I0914 10:39:45.482730    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:39:45.494475    5189 logs.go:123] Gathering logs for dmesg ...
	I0914 10:39:45.494486    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:39:45.498624    5189 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:39:45.498633    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:39:45.533309    5189 logs.go:123] Gathering logs for kube-apiserver [6578fb58a2c5] ...
	I0914 10:39:45.533322    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6578fb58a2c5"
	I0914 10:39:45.548585    5189 logs.go:123] Gathering logs for coredns [578d76dcac2e] ...
	I0914 10:39:45.548597    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 578d76dcac2e"
	I0914 10:39:45.560200    5189 logs.go:123] Gathering logs for kube-scheduler [2b94b322a5cd] ...
	I0914 10:39:45.560211    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b94b322a5cd"
	I0914 10:39:45.575590    5189 logs.go:123] Gathering logs for kube-proxy [5857464a690a] ...
	I0914 10:39:45.575599    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5857464a690a"
	I0914 10:39:45.586603    5189 logs.go:123] Gathering logs for kube-controller-manager [73421ca00ae8] ...
	I0914 10:39:45.586613    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73421ca00ae8"
	I0914 10:39:48.105999    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:39:53.108568    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:39:53.109180    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:39:53.150712    5189 logs.go:276] 1 containers: [6578fb58a2c5]
	I0914 10:39:53.150863    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:39:53.171491    5189 logs.go:276] 1 containers: [33c10369677e]
	I0914 10:39:53.171604    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:39:53.186357    5189 logs.go:276] 2 containers: [578d76dcac2e 07415dd7a640]
	I0914 10:39:53.186446    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:39:53.199021    5189 logs.go:276] 1 containers: [2b94b322a5cd]
	I0914 10:39:53.199090    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:39:53.210366    5189 logs.go:276] 1 containers: [5857464a690a]
	I0914 10:39:53.210431    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:39:53.221392    5189 logs.go:276] 1 containers: [73421ca00ae8]
	I0914 10:39:53.221457    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:39:53.232221    5189 logs.go:276] 0 containers: []
	W0914 10:39:53.232233    5189 logs.go:278] No container was found matching "kindnet"
	I0914 10:39:53.232301    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:39:53.246903    5189 logs.go:276] 1 containers: [7d720099f565]
	I0914 10:39:53.246921    5189 logs.go:123] Gathering logs for kube-apiserver [6578fb58a2c5] ...
	I0914 10:39:53.246927    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6578fb58a2c5"
	I0914 10:39:53.264177    5189 logs.go:123] Gathering logs for etcd [33c10369677e] ...
	I0914 10:39:53.264188    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33c10369677e"
	I0914 10:39:53.278539    5189 logs.go:123] Gathering logs for coredns [07415dd7a640] ...
	I0914 10:39:53.278551    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07415dd7a640"
	I0914 10:39:53.290243    5189 logs.go:123] Gathering logs for kube-proxy [5857464a690a] ...
	I0914 10:39:53.290251    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5857464a690a"
	I0914 10:39:53.301684    5189 logs.go:123] Gathering logs for kube-controller-manager [73421ca00ae8] ...
	I0914 10:39:53.301692    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73421ca00ae8"
	I0914 10:39:53.320091    5189 logs.go:123] Gathering logs for storage-provisioner [7d720099f565] ...
	I0914 10:39:53.320106    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d720099f565"
	I0914 10:39:53.331985    5189 logs.go:123] Gathering logs for dmesg ...
	I0914 10:39:53.331994    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:39:53.336195    5189 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:39:53.336201    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:39:53.371394    5189 logs.go:123] Gathering logs for container status ...
	I0914 10:39:53.371409    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:39:53.383166    5189 logs.go:123] Gathering logs for kube-scheduler [2b94b322a5cd] ...
	I0914 10:39:53.383177    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b94b322a5cd"
	I0914 10:39:53.398632    5189 logs.go:123] Gathering logs for Docker ...
	I0914 10:39:53.398642    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:39:53.422139    5189 logs.go:123] Gathering logs for kubelet ...
	I0914 10:39:53.422148    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:39:53.456740    5189 logs.go:123] Gathering logs for coredns [578d76dcac2e] ...
	I0914 10:39:53.456749    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 578d76dcac2e"
	I0914 10:39:55.973154    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:40:00.975651    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:40:00.976090    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:40:01.006941    5189 logs.go:276] 1 containers: [6578fb58a2c5]
	I0914 10:40:01.007096    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:40:01.025496    5189 logs.go:276] 1 containers: [33c10369677e]
	I0914 10:40:01.025602    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:40:01.045372    5189 logs.go:276] 4 containers: [e0dc42ed497c b782abdd0aa1 578d76dcac2e 07415dd7a640]
	I0914 10:40:01.045459    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:40:01.056382    5189 logs.go:276] 1 containers: [2b94b322a5cd]
	I0914 10:40:01.056455    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:40:01.067316    5189 logs.go:276] 1 containers: [5857464a690a]
	I0914 10:40:01.067378    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:40:01.077615    5189 logs.go:276] 1 containers: [73421ca00ae8]
	I0914 10:40:01.077716    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:40:01.087515    5189 logs.go:276] 0 containers: []
	W0914 10:40:01.087530    5189 logs.go:278] No container was found matching "kindnet"
	I0914 10:40:01.087591    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:40:01.098097    5189 logs.go:276] 1 containers: [7d720099f565]
	I0914 10:40:01.098114    5189 logs.go:123] Gathering logs for kubelet ...
	I0914 10:40:01.098120    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:40:01.133918    5189 logs.go:123] Gathering logs for kube-scheduler [2b94b322a5cd] ...
	I0914 10:40:01.133928    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b94b322a5cd"
	I0914 10:40:01.149225    5189 logs.go:123] Gathering logs for kube-proxy [5857464a690a] ...
	I0914 10:40:01.149237    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5857464a690a"
	I0914 10:40:01.160850    5189 logs.go:123] Gathering logs for Docker ...
	I0914 10:40:01.160864    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:40:01.185873    5189 logs.go:123] Gathering logs for container status ...
	I0914 10:40:01.185881    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:40:01.196950    5189 logs.go:123] Gathering logs for dmesg ...
	I0914 10:40:01.196961    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:40:01.201268    5189 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:40:01.201275    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:40:01.236628    5189 logs.go:123] Gathering logs for etcd [33c10369677e] ...
	I0914 10:40:01.236638    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33c10369677e"
	I0914 10:40:01.251995    5189 logs.go:123] Gathering logs for coredns [e0dc42ed497c] ...
	I0914 10:40:01.252010    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0dc42ed497c"
	I0914 10:40:01.263487    5189 logs.go:123] Gathering logs for coredns [578d76dcac2e] ...
	I0914 10:40:01.263499    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 578d76dcac2e"
	I0914 10:40:01.275301    5189 logs.go:123] Gathering logs for kube-controller-manager [73421ca00ae8] ...
	I0914 10:40:01.275311    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73421ca00ae8"
	I0914 10:40:01.292801    5189 logs.go:123] Gathering logs for coredns [07415dd7a640] ...
	I0914 10:40:01.292810    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07415dd7a640"
	I0914 10:40:01.303889    5189 logs.go:123] Gathering logs for storage-provisioner [7d720099f565] ...
	I0914 10:40:01.303900    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d720099f565"
	I0914 10:40:01.315738    5189 logs.go:123] Gathering logs for kube-apiserver [6578fb58a2c5] ...
	I0914 10:40:01.315748    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6578fb58a2c5"
	I0914 10:40:01.329925    5189 logs.go:123] Gathering logs for coredns [b782abdd0aa1] ...
	I0914 10:40:01.329934    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b782abdd0aa1"
	I0914 10:40:03.842067    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:40:08.844260    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:40:08.844374    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:40:08.856690    5189 logs.go:276] 1 containers: [6578fb58a2c5]
	I0914 10:40:08.856753    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:40:08.867194    5189 logs.go:276] 1 containers: [33c10369677e]
	I0914 10:40:08.867261    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:40:08.879150    5189 logs.go:276] 4 containers: [e0dc42ed497c b782abdd0aa1 578d76dcac2e 07415dd7a640]
	I0914 10:40:08.879249    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:40:08.890251    5189 logs.go:276] 1 containers: [2b94b322a5cd]
	I0914 10:40:08.890333    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:40:08.902085    5189 logs.go:276] 1 containers: [5857464a690a]
	I0914 10:40:08.902167    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:40:08.919147    5189 logs.go:276] 1 containers: [73421ca00ae8]
	I0914 10:40:08.919211    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:40:08.933888    5189 logs.go:276] 0 containers: []
	W0914 10:40:08.933900    5189 logs.go:278] No container was found matching "kindnet"
	I0914 10:40:08.933965    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:40:08.945057    5189 logs.go:276] 1 containers: [7d720099f565]
	I0914 10:40:08.945076    5189 logs.go:123] Gathering logs for coredns [578d76dcac2e] ...
	I0914 10:40:08.945081    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 578d76dcac2e"
	I0914 10:40:08.957779    5189 logs.go:123] Gathering logs for dmesg ...
	I0914 10:40:08.957790    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:40:08.962849    5189 logs.go:123] Gathering logs for kube-apiserver [6578fb58a2c5] ...
	I0914 10:40:08.962860    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6578fb58a2c5"
	I0914 10:40:08.978730    5189 logs.go:123] Gathering logs for coredns [b782abdd0aa1] ...
	I0914 10:40:08.978739    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b782abdd0aa1"
	I0914 10:40:08.991218    5189 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:40:08.991231    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:40:09.030143    5189 logs.go:123] Gathering logs for etcd [33c10369677e] ...
	I0914 10:40:09.030155    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33c10369677e"
	I0914 10:40:09.045412    5189 logs.go:123] Gathering logs for coredns [07415dd7a640] ...
	I0914 10:40:09.045430    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07415dd7a640"
	I0914 10:40:09.057407    5189 logs.go:123] Gathering logs for container status ...
	I0914 10:40:09.057419    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:40:09.069880    5189 logs.go:123] Gathering logs for storage-provisioner [7d720099f565] ...
	I0914 10:40:09.069893    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d720099f565"
	I0914 10:40:09.082397    5189 logs.go:123] Gathering logs for Docker ...
	I0914 10:40:09.082410    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:40:09.108116    5189 logs.go:123] Gathering logs for kubelet ...
	I0914 10:40:09.108134    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:40:09.143356    5189 logs.go:123] Gathering logs for coredns [e0dc42ed497c] ...
	I0914 10:40:09.143375    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0dc42ed497c"
	I0914 10:40:09.156050    5189 logs.go:123] Gathering logs for kube-scheduler [2b94b322a5cd] ...
	I0914 10:40:09.156062    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b94b322a5cd"
	I0914 10:40:09.172067    5189 logs.go:123] Gathering logs for kube-proxy [5857464a690a] ...
	I0914 10:40:09.172081    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5857464a690a"
	I0914 10:40:09.184999    5189 logs.go:123] Gathering logs for kube-controller-manager [73421ca00ae8] ...
	I0914 10:40:09.185012    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73421ca00ae8"
	I0914 10:40:11.705053    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:40:16.707579    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:40:16.707985    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:40:16.739046    5189 logs.go:276] 1 containers: [6578fb58a2c5]
	I0914 10:40:16.739181    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:40:16.757079    5189 logs.go:276] 1 containers: [33c10369677e]
	I0914 10:40:16.757192    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:40:16.774618    5189 logs.go:276] 4 containers: [e0dc42ed497c b782abdd0aa1 578d76dcac2e 07415dd7a640]
	I0914 10:40:16.774717    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:40:16.785707    5189 logs.go:276] 1 containers: [2b94b322a5cd]
	I0914 10:40:16.785782    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:40:16.795552    5189 logs.go:276] 1 containers: [5857464a690a]
	I0914 10:40:16.795630    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:40:16.805932    5189 logs.go:276] 1 containers: [73421ca00ae8]
	I0914 10:40:16.806015    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:40:16.816197    5189 logs.go:276] 0 containers: []
	W0914 10:40:16.816211    5189 logs.go:278] No container was found matching "kindnet"
	I0914 10:40:16.816274    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:40:16.830700    5189 logs.go:276] 1 containers: [7d720099f565]
	I0914 10:40:16.830719    5189 logs.go:123] Gathering logs for kube-scheduler [2b94b322a5cd] ...
	I0914 10:40:16.830724    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b94b322a5cd"
	I0914 10:40:16.845865    5189 logs.go:123] Gathering logs for kube-proxy [5857464a690a] ...
	I0914 10:40:16.845874    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5857464a690a"
	I0914 10:40:16.857296    5189 logs.go:123] Gathering logs for container status ...
	I0914 10:40:16.857309    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:40:16.868874    5189 logs.go:123] Gathering logs for coredns [b782abdd0aa1] ...
	I0914 10:40:16.868884    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b782abdd0aa1"
	I0914 10:40:16.880767    5189 logs.go:123] Gathering logs for coredns [578d76dcac2e] ...
	I0914 10:40:16.880780    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 578d76dcac2e"
	I0914 10:40:16.891906    5189 logs.go:123] Gathering logs for coredns [07415dd7a640] ...
	I0914 10:40:16.891915    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07415dd7a640"
	I0914 10:40:16.903098    5189 logs.go:123] Gathering logs for dmesg ...
	I0914 10:40:16.903110    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:40:16.907383    5189 logs.go:123] Gathering logs for kube-apiserver [6578fb58a2c5] ...
	I0914 10:40:16.907389    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6578fb58a2c5"
	I0914 10:40:16.922137    5189 logs.go:123] Gathering logs for kube-controller-manager [73421ca00ae8] ...
	I0914 10:40:16.922150    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73421ca00ae8"
	I0914 10:40:16.939890    5189 logs.go:123] Gathering logs for kubelet ...
	I0914 10:40:16.939901    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:40:16.973480    5189 logs.go:123] Gathering logs for etcd [33c10369677e] ...
	I0914 10:40:16.973489    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33c10369677e"
	I0914 10:40:16.990043    5189 logs.go:123] Gathering logs for coredns [e0dc42ed497c] ...
	I0914 10:40:16.990052    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0dc42ed497c"
	I0914 10:40:17.001392    5189 logs.go:123] Gathering logs for storage-provisioner [7d720099f565] ...
	I0914 10:40:17.001406    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d720099f565"
	I0914 10:40:17.012898    5189 logs.go:123] Gathering logs for Docker ...
	I0914 10:40:17.012910    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:40:17.037674    5189 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:40:17.037683    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:40:19.582431    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:40:24.584639    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:40:24.585118    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:40:24.619260    5189 logs.go:276] 1 containers: [6578fb58a2c5]
	I0914 10:40:24.619422    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:40:24.638994    5189 logs.go:276] 1 containers: [33c10369677e]
	I0914 10:40:24.639095    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:40:24.653631    5189 logs.go:276] 4 containers: [e0dc42ed497c b782abdd0aa1 578d76dcac2e 07415dd7a640]
	I0914 10:40:24.653713    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:40:24.665634    5189 logs.go:276] 1 containers: [2b94b322a5cd]
	I0914 10:40:24.665709    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:40:24.677618    5189 logs.go:276] 1 containers: [5857464a690a]
	I0914 10:40:24.677700    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:40:24.690864    5189 logs.go:276] 1 containers: [73421ca00ae8]
	I0914 10:40:24.690932    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:40:24.703650    5189 logs.go:276] 0 containers: []
	W0914 10:40:24.703666    5189 logs.go:278] No container was found matching "kindnet"
	I0914 10:40:24.703738    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:40:24.714106    5189 logs.go:276] 1 containers: [7d720099f565]
	I0914 10:40:24.714125    5189 logs.go:123] Gathering logs for coredns [e0dc42ed497c] ...
	I0914 10:40:24.714130    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0dc42ed497c"
	I0914 10:40:24.726767    5189 logs.go:123] Gathering logs for coredns [578d76dcac2e] ...
	I0914 10:40:24.726780    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 578d76dcac2e"
	I0914 10:40:24.738345    5189 logs.go:123] Gathering logs for kube-scheduler [2b94b322a5cd] ...
	I0914 10:40:24.738354    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b94b322a5cd"
	I0914 10:40:24.758530    5189 logs.go:123] Gathering logs for kube-controller-manager [73421ca00ae8] ...
	I0914 10:40:24.758540    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73421ca00ae8"
	I0914 10:40:24.775679    5189 logs.go:123] Gathering logs for Docker ...
	I0914 10:40:24.775690    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:40:24.802312    5189 logs.go:123] Gathering logs for kubelet ...
	I0914 10:40:24.802323    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:40:24.837302    5189 logs.go:123] Gathering logs for kube-apiserver [6578fb58a2c5] ...
	I0914 10:40:24.837309    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6578fb58a2c5"
	I0914 10:40:24.851376    5189 logs.go:123] Gathering logs for etcd [33c10369677e] ...
	I0914 10:40:24.851388    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33c10369677e"
	I0914 10:40:24.865185    5189 logs.go:123] Gathering logs for coredns [b782abdd0aa1] ...
	I0914 10:40:24.865198    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b782abdd0aa1"
	I0914 10:40:24.876293    5189 logs.go:123] Gathering logs for storage-provisioner [7d720099f565] ...
	I0914 10:40:24.876304    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d720099f565"
	I0914 10:40:24.894375    5189 logs.go:123] Gathering logs for container status ...
	I0914 10:40:24.894387    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:40:24.906037    5189 logs.go:123] Gathering logs for dmesg ...
	I0914 10:40:24.906049    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:40:24.911080    5189 logs.go:123] Gathering logs for coredns [07415dd7a640] ...
	I0914 10:40:24.911089    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07415dd7a640"
	I0914 10:40:24.923201    5189 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:40:24.923214    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:40:24.959541    5189 logs.go:123] Gathering logs for kube-proxy [5857464a690a] ...
	I0914 10:40:24.959555    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5857464a690a"
	I0914 10:40:27.473226    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:40:32.475841    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:40:32.476081    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:40:32.496369    5189 logs.go:276] 1 containers: [6578fb58a2c5]
	I0914 10:40:32.496472    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:40:32.510204    5189 logs.go:276] 1 containers: [33c10369677e]
	I0914 10:40:32.510279    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:40:32.522925    5189 logs.go:276] 4 containers: [e0dc42ed497c b782abdd0aa1 578d76dcac2e 07415dd7a640]
	I0914 10:40:32.523000    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:40:32.533906    5189 logs.go:276] 1 containers: [2b94b322a5cd]
	I0914 10:40:32.533991    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:40:32.544221    5189 logs.go:276] 1 containers: [5857464a690a]
	I0914 10:40:32.544294    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:40:32.554314    5189 logs.go:276] 1 containers: [73421ca00ae8]
	I0914 10:40:32.554400    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:40:32.573564    5189 logs.go:276] 0 containers: []
	W0914 10:40:32.573577    5189 logs.go:278] No container was found matching "kindnet"
	I0914 10:40:32.573653    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:40:32.584265    5189 logs.go:276] 1 containers: [7d720099f565]
	I0914 10:40:32.584281    5189 logs.go:123] Gathering logs for kube-apiserver [6578fb58a2c5] ...
	I0914 10:40:32.584287    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6578fb58a2c5"
	I0914 10:40:32.598322    5189 logs.go:123] Gathering logs for coredns [578d76dcac2e] ...
	I0914 10:40:32.598334    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 578d76dcac2e"
	I0914 10:40:32.610084    5189 logs.go:123] Gathering logs for coredns [07415dd7a640] ...
	I0914 10:40:32.610096    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07415dd7a640"
	I0914 10:40:32.621862    5189 logs.go:123] Gathering logs for Docker ...
	I0914 10:40:32.621872    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:40:32.647211    5189 logs.go:123] Gathering logs for container status ...
	I0914 10:40:32.647218    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:40:32.659159    5189 logs.go:123] Gathering logs for etcd [33c10369677e] ...
	I0914 10:40:32.659167    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33c10369677e"
	I0914 10:40:32.674391    5189 logs.go:123] Gathering logs for kube-controller-manager [73421ca00ae8] ...
	I0914 10:40:32.674399    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73421ca00ae8"
	I0914 10:40:32.692385    5189 logs.go:123] Gathering logs for storage-provisioner [7d720099f565] ...
	I0914 10:40:32.692399    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d720099f565"
	I0914 10:40:32.709446    5189 logs.go:123] Gathering logs for kube-scheduler [2b94b322a5cd] ...
	I0914 10:40:32.709460    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b94b322a5cd"
	I0914 10:40:32.730386    5189 logs.go:123] Gathering logs for kube-proxy [5857464a690a] ...
	I0914 10:40:32.730401    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5857464a690a"
	I0914 10:40:32.745316    5189 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:40:32.745329    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:40:32.782767    5189 logs.go:123] Gathering logs for coredns [e0dc42ed497c] ...
	I0914 10:40:32.782781    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0dc42ed497c"
	I0914 10:40:32.795604    5189 logs.go:123] Gathering logs for coredns [b782abdd0aa1] ...
	I0914 10:40:32.795617    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b782abdd0aa1"
	I0914 10:40:32.808591    5189 logs.go:123] Gathering logs for kubelet ...
	I0914 10:40:32.808602    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:40:32.845140    5189 logs.go:123] Gathering logs for dmesg ...
	I0914 10:40:32.845157    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:40:35.352116    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:40:40.354251    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:40:40.354830    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:40:40.394163    5189 logs.go:276] 1 containers: [6578fb58a2c5]
	I0914 10:40:40.394321    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:40:40.414769    5189 logs.go:276] 1 containers: [33c10369677e]
	I0914 10:40:40.414884    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:40:40.429979    5189 logs.go:276] 4 containers: [e0dc42ed497c b782abdd0aa1 578d76dcac2e 07415dd7a640]
	I0914 10:40:40.430061    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:40:40.442810    5189 logs.go:276] 1 containers: [2b94b322a5cd]
	I0914 10:40:40.442887    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:40:40.453735    5189 logs.go:276] 1 containers: [5857464a690a]
	I0914 10:40:40.453815    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:40:40.464111    5189 logs.go:276] 1 containers: [73421ca00ae8]
	I0914 10:40:40.464186    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:40:40.473879    5189 logs.go:276] 0 containers: []
	W0914 10:40:40.473890    5189 logs.go:278] No container was found matching "kindnet"
	I0914 10:40:40.473952    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:40:40.484410    5189 logs.go:276] 1 containers: [7d720099f565]
	I0914 10:40:40.484426    5189 logs.go:123] Gathering logs for dmesg ...
	I0914 10:40:40.484432    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:40:40.488527    5189 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:40:40.488534    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:40:40.527547    5189 logs.go:123] Gathering logs for Docker ...
	I0914 10:40:40.527558    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:40:40.552460    5189 logs.go:123] Gathering logs for kube-apiserver [6578fb58a2c5] ...
	I0914 10:40:40.552466    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6578fb58a2c5"
	I0914 10:40:40.565945    5189 logs.go:123] Gathering logs for coredns [e0dc42ed497c] ...
	I0914 10:40:40.565956    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0dc42ed497c"
	I0914 10:40:40.577760    5189 logs.go:123] Gathering logs for coredns [b782abdd0aa1] ...
	I0914 10:40:40.577773    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b782abdd0aa1"
	I0914 10:40:40.589157    5189 logs.go:123] Gathering logs for kube-proxy [5857464a690a] ...
	I0914 10:40:40.589169    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5857464a690a"
	I0914 10:40:40.601288    5189 logs.go:123] Gathering logs for kubelet ...
	I0914 10:40:40.601301    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:40:40.637105    5189 logs.go:123] Gathering logs for kube-scheduler [2b94b322a5cd] ...
	I0914 10:40:40.637116    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b94b322a5cd"
	I0914 10:40:40.652328    5189 logs.go:123] Gathering logs for kube-controller-manager [73421ca00ae8] ...
	I0914 10:40:40.652338    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73421ca00ae8"
	I0914 10:40:40.681276    5189 logs.go:123] Gathering logs for container status ...
	I0914 10:40:40.681285    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:40:40.701650    5189 logs.go:123] Gathering logs for etcd [33c10369677e] ...
	I0914 10:40:40.701660    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33c10369677e"
	I0914 10:40:40.718500    5189 logs.go:123] Gathering logs for coredns [578d76dcac2e] ...
	I0914 10:40:40.718511    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 578d76dcac2e"
	I0914 10:40:40.730242    5189 logs.go:123] Gathering logs for coredns [07415dd7a640] ...
	I0914 10:40:40.730251    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07415dd7a640"
	I0914 10:40:40.742276    5189 logs.go:123] Gathering logs for storage-provisioner [7d720099f565] ...
	I0914 10:40:40.742292    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d720099f565"
	I0914 10:40:43.255830    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:40:48.258097    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:40:48.258602    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:40:48.291058    5189 logs.go:276] 1 containers: [6578fb58a2c5]
	I0914 10:40:48.291203    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:40:48.310220    5189 logs.go:276] 1 containers: [33c10369677e]
	I0914 10:40:48.310325    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:40:48.324470    5189 logs.go:276] 4 containers: [e0dc42ed497c b782abdd0aa1 578d76dcac2e 07415dd7a640]
	I0914 10:40:48.324564    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:40:48.336019    5189 logs.go:276] 1 containers: [2b94b322a5cd]
	I0914 10:40:48.336093    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:40:48.346539    5189 logs.go:276] 1 containers: [5857464a690a]
	I0914 10:40:48.346615    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:40:48.356891    5189 logs.go:276] 1 containers: [73421ca00ae8]
	I0914 10:40:48.356980    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:40:48.366773    5189 logs.go:276] 0 containers: []
	W0914 10:40:48.366787    5189 logs.go:278] No container was found matching "kindnet"
	I0914 10:40:48.366852    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:40:48.377156    5189 logs.go:276] 1 containers: [7d720099f565]
	I0914 10:40:48.377171    5189 logs.go:123] Gathering logs for etcd [33c10369677e] ...
	I0914 10:40:48.377177    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33c10369677e"
	I0914 10:40:48.390369    5189 logs.go:123] Gathering logs for kube-controller-manager [73421ca00ae8] ...
	I0914 10:40:48.390378    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73421ca00ae8"
	I0914 10:40:48.408076    5189 logs.go:123] Gathering logs for container status ...
	I0914 10:40:48.408086    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:40:48.419916    5189 logs.go:123] Gathering logs for kubelet ...
	I0914 10:40:48.419925    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:40:48.452761    5189 logs.go:123] Gathering logs for kube-scheduler [2b94b322a5cd] ...
	I0914 10:40:48.452769    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b94b322a5cd"
	I0914 10:40:48.468143    5189 logs.go:123] Gathering logs for kube-proxy [5857464a690a] ...
	I0914 10:40:48.468155    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5857464a690a"
	I0914 10:40:48.479793    5189 logs.go:123] Gathering logs for storage-provisioner [7d720099f565] ...
	I0914 10:40:48.479805    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d720099f565"
	I0914 10:40:48.494933    5189 logs.go:123] Gathering logs for Docker ...
	I0914 10:40:48.494942    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:40:48.518702    5189 logs.go:123] Gathering logs for kube-apiserver [6578fb58a2c5] ...
	I0914 10:40:48.518713    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6578fb58a2c5"
	I0914 10:40:48.532751    5189 logs.go:123] Gathering logs for coredns [e0dc42ed497c] ...
	I0914 10:40:48.532763    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0dc42ed497c"
	I0914 10:40:48.545758    5189 logs.go:123] Gathering logs for coredns [578d76dcac2e] ...
	I0914 10:40:48.545770    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 578d76dcac2e"
	I0914 10:40:48.557373    5189 logs.go:123] Gathering logs for dmesg ...
	I0914 10:40:48.557386    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:40:48.561995    5189 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:40:48.562003    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:40:48.596449    5189 logs.go:123] Gathering logs for coredns [b782abdd0aa1] ...
	I0914 10:40:48.596461    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b782abdd0aa1"
	I0914 10:40:48.612273    5189 logs.go:123] Gathering logs for coredns [07415dd7a640] ...
	I0914 10:40:48.612284    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07415dd7a640"
	I0914 10:40:51.127070    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:40:56.129711    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:40:56.130263    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:40:56.175782    5189 logs.go:276] 1 containers: [6578fb58a2c5]
	I0914 10:40:56.175937    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:40:56.193457    5189 logs.go:276] 1 containers: [33c10369677e]
	I0914 10:40:56.193569    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:40:56.207503    5189 logs.go:276] 4 containers: [e0dc42ed497c b782abdd0aa1 578d76dcac2e 07415dd7a640]
	I0914 10:40:56.207584    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:40:56.219938    5189 logs.go:276] 1 containers: [2b94b322a5cd]
	I0914 10:40:56.220024    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:40:56.233647    5189 logs.go:276] 1 containers: [5857464a690a]
	I0914 10:40:56.233726    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:40:56.244413    5189 logs.go:276] 1 containers: [73421ca00ae8]
	I0914 10:40:56.244496    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:40:56.254377    5189 logs.go:276] 0 containers: []
	W0914 10:40:56.254393    5189 logs.go:278] No container was found matching "kindnet"
	I0914 10:40:56.254464    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:40:56.265649    5189 logs.go:276] 1 containers: [7d720099f565]
	I0914 10:40:56.265667    5189 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:40:56.265673    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:40:56.304313    5189 logs.go:123] Gathering logs for storage-provisioner [7d720099f565] ...
	I0914 10:40:56.304327    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d720099f565"
	I0914 10:40:56.317500    5189 logs.go:123] Gathering logs for dmesg ...
	I0914 10:40:56.317515    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:40:56.322299    5189 logs.go:123] Gathering logs for coredns [578d76dcac2e] ...
	I0914 10:40:56.322309    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 578d76dcac2e"
	I0914 10:40:56.334158    5189 logs.go:123] Gathering logs for coredns [07415dd7a640] ...
	I0914 10:40:56.334170    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07415dd7a640"
	I0914 10:40:56.346073    5189 logs.go:123] Gathering logs for kube-scheduler [2b94b322a5cd] ...
	I0914 10:40:56.346085    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b94b322a5cd"
	I0914 10:40:56.361341    5189 logs.go:123] Gathering logs for kube-proxy [5857464a690a] ...
	I0914 10:40:56.361350    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5857464a690a"
	I0914 10:40:56.373113    5189 logs.go:123] Gathering logs for etcd [33c10369677e] ...
	I0914 10:40:56.373124    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33c10369677e"
	I0914 10:40:56.386971    5189 logs.go:123] Gathering logs for coredns [e0dc42ed497c] ...
	I0914 10:40:56.386980    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0dc42ed497c"
	I0914 10:40:56.400894    5189 logs.go:123] Gathering logs for Docker ...
	I0914 10:40:56.400903    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:40:56.424578    5189 logs.go:123] Gathering logs for container status ...
	I0914 10:40:56.424585    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:40:56.437044    5189 logs.go:123] Gathering logs for kubelet ...
	I0914 10:40:56.437054    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:40:56.470382    5189 logs.go:123] Gathering logs for coredns [b782abdd0aa1] ...
	I0914 10:40:56.470390    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b782abdd0aa1"
	I0914 10:40:56.482387    5189 logs.go:123] Gathering logs for kube-controller-manager [73421ca00ae8] ...
	I0914 10:40:56.482399    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73421ca00ae8"
	I0914 10:40:56.499843    5189 logs.go:123] Gathering logs for kube-apiserver [6578fb58a2c5] ...
	I0914 10:40:56.499852    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6578fb58a2c5"
	I0914 10:40:59.016759    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:41:04.018414    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:41:04.018966    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:41:04.064861    5189 logs.go:276] 1 containers: [6578fb58a2c5]
	I0914 10:41:04.065011    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:41:04.083701    5189 logs.go:276] 1 containers: [33c10369677e]
	I0914 10:41:04.083813    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:41:04.097816    5189 logs.go:276] 4 containers: [e0dc42ed497c b782abdd0aa1 578d76dcac2e 07415dd7a640]
	I0914 10:41:04.097900    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:41:04.110122    5189 logs.go:276] 1 containers: [2b94b322a5cd]
	I0914 10:41:04.110207    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:41:04.120951    5189 logs.go:276] 1 containers: [5857464a690a]
	I0914 10:41:04.121048    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:41:04.132584    5189 logs.go:276] 1 containers: [73421ca00ae8]
	I0914 10:41:04.132674    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:41:04.142444    5189 logs.go:276] 0 containers: []
	W0914 10:41:04.142458    5189 logs.go:278] No container was found matching "kindnet"
	I0914 10:41:04.142527    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:41:04.153111    5189 logs.go:276] 1 containers: [7d720099f565]
	I0914 10:41:04.153129    5189 logs.go:123] Gathering logs for etcd [33c10369677e] ...
	I0914 10:41:04.153134    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33c10369677e"
	I0914 10:41:04.167086    5189 logs.go:123] Gathering logs for storage-provisioner [7d720099f565] ...
	I0914 10:41:04.167098    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d720099f565"
	I0914 10:41:04.178612    5189 logs.go:123] Gathering logs for Docker ...
	I0914 10:41:04.178625    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:41:04.202643    5189 logs.go:123] Gathering logs for dmesg ...
	I0914 10:41:04.202652    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:41:04.206733    5189 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:41:04.206741    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:41:04.241168    5189 logs.go:123] Gathering logs for kube-scheduler [2b94b322a5cd] ...
	I0914 10:41:04.241182    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b94b322a5cd"
	I0914 10:41:04.256341    5189 logs.go:123] Gathering logs for kube-proxy [5857464a690a] ...
	I0914 10:41:04.256353    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5857464a690a"
	I0914 10:41:04.267896    5189 logs.go:123] Gathering logs for container status ...
	I0914 10:41:04.267909    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:41:04.279555    5189 logs.go:123] Gathering logs for kube-apiserver [6578fb58a2c5] ...
	I0914 10:41:04.279567    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6578fb58a2c5"
	I0914 10:41:04.297215    5189 logs.go:123] Gathering logs for coredns [07415dd7a640] ...
	I0914 10:41:04.297227    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07415dd7a640"
	I0914 10:41:04.309238    5189 logs.go:123] Gathering logs for kubelet ...
	I0914 10:41:04.309261    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:41:04.343911    5189 logs.go:123] Gathering logs for coredns [b782abdd0aa1] ...
	I0914 10:41:04.343918    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b782abdd0aa1"
	I0914 10:41:04.355592    5189 logs.go:123] Gathering logs for kube-controller-manager [73421ca00ae8] ...
	I0914 10:41:04.355605    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73421ca00ae8"
	I0914 10:41:04.373445    5189 logs.go:123] Gathering logs for coredns [e0dc42ed497c] ...
	I0914 10:41:04.373455    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0dc42ed497c"
	I0914 10:41:04.385261    5189 logs.go:123] Gathering logs for coredns [578d76dcac2e] ...
	I0914 10:41:04.385271    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 578d76dcac2e"
	I0914 10:41:06.898958    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:41:11.900904    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:41:11.901065    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:41:11.923100    5189 logs.go:276] 1 containers: [6578fb58a2c5]
	I0914 10:41:11.923188    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:41:11.933876    5189 logs.go:276] 1 containers: [33c10369677e]
	I0914 10:41:11.933946    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:41:11.944913    5189 logs.go:276] 4 containers: [e0dc42ed497c b782abdd0aa1 578d76dcac2e 07415dd7a640]
	I0914 10:41:11.944988    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:41:11.955266    5189 logs.go:276] 1 containers: [2b94b322a5cd]
	I0914 10:41:11.955348    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:41:11.965391    5189 logs.go:276] 1 containers: [5857464a690a]
	I0914 10:41:11.965456    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:41:11.975808    5189 logs.go:276] 1 containers: [73421ca00ae8]
	I0914 10:41:11.975887    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:41:11.986038    5189 logs.go:276] 0 containers: []
	W0914 10:41:11.986054    5189 logs.go:278] No container was found matching "kindnet"
	I0914 10:41:11.986127    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:41:11.996560    5189 logs.go:276] 1 containers: [7d720099f565]
	I0914 10:41:11.996577    5189 logs.go:123] Gathering logs for kubelet ...
	I0914 10:41:11.996583    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:41:12.031553    5189 logs.go:123] Gathering logs for kube-apiserver [6578fb58a2c5] ...
	I0914 10:41:12.031563    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6578fb58a2c5"
	I0914 10:41:12.045757    5189 logs.go:123] Gathering logs for coredns [b782abdd0aa1] ...
	I0914 10:41:12.045766    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b782abdd0aa1"
	I0914 10:41:12.057316    5189 logs.go:123] Gathering logs for coredns [578d76dcac2e] ...
	I0914 10:41:12.057326    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 578d76dcac2e"
	I0914 10:41:12.069069    5189 logs.go:123] Gathering logs for coredns [07415dd7a640] ...
	I0914 10:41:12.069078    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07415dd7a640"
	I0914 10:41:12.080326    5189 logs.go:123] Gathering logs for kube-controller-manager [73421ca00ae8] ...
	I0914 10:41:12.080335    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73421ca00ae8"
	I0914 10:41:12.097481    5189 logs.go:123] Gathering logs for dmesg ...
	I0914 10:41:12.097492    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:41:12.102078    5189 logs.go:123] Gathering logs for etcd [33c10369677e] ...
	I0914 10:41:12.102086    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33c10369677e"
	I0914 10:41:12.116093    5189 logs.go:123] Gathering logs for storage-provisioner [7d720099f565] ...
	I0914 10:41:12.116103    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d720099f565"
	I0914 10:41:12.136483    5189 logs.go:123] Gathering logs for kube-scheduler [2b94b322a5cd] ...
	I0914 10:41:12.136496    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b94b322a5cd"
	I0914 10:41:12.151551    5189 logs.go:123] Gathering logs for Docker ...
	I0914 10:41:12.151563    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:41:12.176584    5189 logs.go:123] Gathering logs for container status ...
	I0914 10:41:12.176594    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:41:12.188114    5189 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:41:12.188127    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:41:12.221928    5189 logs.go:123] Gathering logs for coredns [e0dc42ed497c] ...
	I0914 10:41:12.221937    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0dc42ed497c"
	I0914 10:41:12.234232    5189 logs.go:123] Gathering logs for kube-proxy [5857464a690a] ...
	I0914 10:41:12.234244    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5857464a690a"
	I0914 10:41:14.747340    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:41:19.749610    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:41:19.749936    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:41:19.779254    5189 logs.go:276] 1 containers: [6578fb58a2c5]
	I0914 10:41:19.779401    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:41:19.797396    5189 logs.go:276] 1 containers: [33c10369677e]
	I0914 10:41:19.797485    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:41:19.811329    5189 logs.go:276] 4 containers: [e0dc42ed497c b782abdd0aa1 578d76dcac2e 07415dd7a640]
	I0914 10:41:19.811421    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:41:19.822514    5189 logs.go:276] 1 containers: [2b94b322a5cd]
	I0914 10:41:19.822589    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:41:19.832583    5189 logs.go:276] 1 containers: [5857464a690a]
	I0914 10:41:19.832652    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:41:19.842946    5189 logs.go:276] 1 containers: [73421ca00ae8]
	I0914 10:41:19.843030    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:41:19.853563    5189 logs.go:276] 0 containers: []
	W0914 10:41:19.853574    5189 logs.go:278] No container was found matching "kindnet"
	I0914 10:41:19.853634    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:41:19.864290    5189 logs.go:276] 1 containers: [7d720099f565]
	I0914 10:41:19.864307    5189 logs.go:123] Gathering logs for coredns [578d76dcac2e] ...
	I0914 10:41:19.864316    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 578d76dcac2e"
	I0914 10:41:19.876071    5189 logs.go:123] Gathering logs for kube-scheduler [2b94b322a5cd] ...
	I0914 10:41:19.876082    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b94b322a5cd"
	I0914 10:41:19.894371    5189 logs.go:123] Gathering logs for kube-controller-manager [73421ca00ae8] ...
	I0914 10:41:19.894384    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73421ca00ae8"
	I0914 10:41:19.912927    5189 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:41:19.912947    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:41:19.965408    5189 logs.go:123] Gathering logs for coredns [e0dc42ed497c] ...
	I0914 10:41:19.965423    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0dc42ed497c"
	I0914 10:41:19.977066    5189 logs.go:123] Gathering logs for coredns [b782abdd0aa1] ...
	I0914 10:41:19.977079    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b782abdd0aa1"
	I0914 10:41:19.988571    5189 logs.go:123] Gathering logs for kube-proxy [5857464a690a] ...
	I0914 10:41:19.988582    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5857464a690a"
	I0914 10:41:20.000133    5189 logs.go:123] Gathering logs for Docker ...
	I0914 10:41:20.000146    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:41:20.023515    5189 logs.go:123] Gathering logs for coredns [07415dd7a640] ...
	I0914 10:41:20.023522    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07415dd7a640"
	I0914 10:41:20.034716    5189 logs.go:123] Gathering logs for container status ...
	I0914 10:41:20.034725    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:41:20.045939    5189 logs.go:123] Gathering logs for kubelet ...
	I0914 10:41:20.045950    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:41:20.081279    5189 logs.go:123] Gathering logs for dmesg ...
	I0914 10:41:20.081290    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:41:20.085790    5189 logs.go:123] Gathering logs for kube-apiserver [6578fb58a2c5] ...
	I0914 10:41:20.085797    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6578fb58a2c5"
	I0914 10:41:20.102794    5189 logs.go:123] Gathering logs for etcd [33c10369677e] ...
	I0914 10:41:20.102808    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33c10369677e"
	I0914 10:41:20.116603    5189 logs.go:123] Gathering logs for storage-provisioner [7d720099f565] ...
	I0914 10:41:20.116613    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d720099f565"
	I0914 10:41:22.632539    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:41:27.634603    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:41:27.635166    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:41:27.674086    5189 logs.go:276] 1 containers: [6578fb58a2c5]
	I0914 10:41:27.674229    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:41:27.701431    5189 logs.go:276] 1 containers: [33c10369677e]
	I0914 10:41:27.701535    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:41:27.716770    5189 logs.go:276] 4 containers: [e0dc42ed497c b782abdd0aa1 578d76dcac2e 07415dd7a640]
	I0914 10:41:27.716864    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:41:27.728527    5189 logs.go:276] 1 containers: [2b94b322a5cd]
	I0914 10:41:27.728596    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:41:27.739548    5189 logs.go:276] 1 containers: [5857464a690a]
	I0914 10:41:27.739617    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:41:27.751019    5189 logs.go:276] 1 containers: [73421ca00ae8]
	I0914 10:41:27.751098    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:41:27.761333    5189 logs.go:276] 0 containers: []
	W0914 10:41:27.761348    5189 logs.go:278] No container was found matching "kindnet"
	I0914 10:41:27.761412    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:41:27.771977    5189 logs.go:276] 1 containers: [7d720099f565]
	I0914 10:41:27.771997    5189 logs.go:123] Gathering logs for kubelet ...
	I0914 10:41:27.772003    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:41:27.806596    5189 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:41:27.806604    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:41:27.841621    5189 logs.go:123] Gathering logs for coredns [e0dc42ed497c] ...
	I0914 10:41:27.841634    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0dc42ed497c"
	I0914 10:41:27.853618    5189 logs.go:123] Gathering logs for coredns [b782abdd0aa1] ...
	I0914 10:41:27.853631    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b782abdd0aa1"
	I0914 10:41:27.864888    5189 logs.go:123] Gathering logs for coredns [578d76dcac2e] ...
	I0914 10:41:27.864898    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 578d76dcac2e"
	I0914 10:41:27.876304    5189 logs.go:123] Gathering logs for kube-proxy [5857464a690a] ...
	I0914 10:41:27.876317    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5857464a690a"
	I0914 10:41:27.887969    5189 logs.go:123] Gathering logs for kube-controller-manager [73421ca00ae8] ...
	I0914 10:41:27.887984    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73421ca00ae8"
	I0914 10:41:27.909018    5189 logs.go:123] Gathering logs for kube-scheduler [2b94b322a5cd] ...
	I0914 10:41:27.909029    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b94b322a5cd"
	I0914 10:41:27.924765    5189 logs.go:123] Gathering logs for dmesg ...
	I0914 10:41:27.924775    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:41:27.929514    5189 logs.go:123] Gathering logs for kube-apiserver [6578fb58a2c5] ...
	I0914 10:41:27.929522    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6578fb58a2c5"
	I0914 10:41:27.943770    5189 logs.go:123] Gathering logs for coredns [07415dd7a640] ...
	I0914 10:41:27.943781    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07415dd7a640"
	I0914 10:41:27.955236    5189 logs.go:123] Gathering logs for Docker ...
	I0914 10:41:27.955246    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:41:27.978284    5189 logs.go:123] Gathering logs for etcd [33c10369677e] ...
	I0914 10:41:27.978294    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33c10369677e"
	I0914 10:41:27.991936    5189 logs.go:123] Gathering logs for storage-provisioner [7d720099f565] ...
	I0914 10:41:27.991946    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d720099f565"
	I0914 10:41:28.003109    5189 logs.go:123] Gathering logs for container status ...
	I0914 10:41:28.003121    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:41:30.517097    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:41:35.519602    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:41:35.519681    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0914 10:41:35.535591    5189 logs.go:276] 1 containers: [6578fb58a2c5]
	I0914 10:41:35.535673    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0914 10:41:35.546500    5189 logs.go:276] 1 containers: [33c10369677e]
	I0914 10:41:35.546572    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0914 10:41:35.557208    5189 logs.go:276] 4 containers: [e0dc42ed497c b782abdd0aa1 578d76dcac2e 07415dd7a640]
	I0914 10:41:35.557288    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0914 10:41:35.568677    5189 logs.go:276] 1 containers: [2b94b322a5cd]
	I0914 10:41:35.568736    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0914 10:41:35.580042    5189 logs.go:276] 1 containers: [5857464a690a]
	I0914 10:41:35.580109    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0914 10:41:35.591003    5189 logs.go:276] 1 containers: [73421ca00ae8]
	I0914 10:41:35.591074    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0914 10:41:35.601450    5189 logs.go:276] 0 containers: []
	W0914 10:41:35.601462    5189 logs.go:278] No container was found matching "kindnet"
	I0914 10:41:35.601510    5189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0914 10:41:35.612747    5189 logs.go:276] 1 containers: [7d720099f565]
	I0914 10:41:35.612766    5189 logs.go:123] Gathering logs for etcd [33c10369677e] ...
	I0914 10:41:35.612772    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33c10369677e"
	I0914 10:41:35.628343    5189 logs.go:123] Gathering logs for coredns [b782abdd0aa1] ...
	I0914 10:41:35.628359    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b782abdd0aa1"
	I0914 10:41:35.642668    5189 logs.go:123] Gathering logs for kube-apiserver [6578fb58a2c5] ...
	I0914 10:41:35.642682    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6578fb58a2c5"
	I0914 10:41:35.658797    5189 logs.go:123] Gathering logs for coredns [578d76dcac2e] ...
	I0914 10:41:35.658817    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 578d76dcac2e"
	I0914 10:41:35.671991    5189 logs.go:123] Gathering logs for coredns [07415dd7a640] ...
	I0914 10:41:35.672005    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07415dd7a640"
	I0914 10:41:35.690534    5189 logs.go:123] Gathering logs for kube-controller-manager [73421ca00ae8] ...
	I0914 10:41:35.690546    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73421ca00ae8"
	I0914 10:41:35.711739    5189 logs.go:123] Gathering logs for Docker ...
	I0914 10:41:35.711759    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0914 10:41:35.741102    5189 logs.go:123] Gathering logs for kubelet ...
	I0914 10:41:35.741119    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 10:41:35.776584    5189 logs.go:123] Gathering logs for kube-scheduler [2b94b322a5cd] ...
	I0914 10:41:35.776602    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b94b322a5cd"
	I0914 10:41:35.793628    5189 logs.go:123] Gathering logs for kube-proxy [5857464a690a] ...
	I0914 10:41:35.793639    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5857464a690a"
	I0914 10:41:35.805720    5189 logs.go:123] Gathering logs for container status ...
	I0914 10:41:35.805731    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 10:41:35.817894    5189 logs.go:123] Gathering logs for dmesg ...
	I0914 10:41:35.817905    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 10:41:35.822110    5189 logs.go:123] Gathering logs for describe nodes ...
	I0914 10:41:35.822117    5189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 10:41:35.857511    5189 logs.go:123] Gathering logs for coredns [e0dc42ed497c] ...
	I0914 10:41:35.857522    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0dc42ed497c"
	I0914 10:41:35.869726    5189 logs.go:123] Gathering logs for storage-provisioner [7d720099f565] ...
	I0914 10:41:35.869736    5189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d720099f565"
	I0914 10:41:38.386922    5189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0914 10:41:43.389015    5189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 10:41:43.404175    5189 out.go:201] 
	W0914 10:41:43.407172    5189 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0914 10:41:43.407178    5189 out.go:270] * 
	W0914 10:41:43.407751    5189 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 10:41:43.427964    5189 out.go:201] 

** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-130000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (575.66s)
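
Unlike the socket_vmnet failures elsewhere in this report, the VM here did boot; the test timed out because the apiserver healthz endpoint at https://10.0.2.15:8443/healthz never reported healthy within the 6m0s node wait. A minimal sketch of a manual probe, assuming the stopped-upgrade-130000 guest were still up and reachable (profile name, guest IP, and container ID are taken from the log above; -k skips TLS verification because healthz is served with the cluster's self-signed certificate):

	# Hedged sketch: poll the same healthz URL the test polled, from inside the guest.
	out/minikube-darwin-arm64 ssh -p stopped-upgrade-130000 -- curl -k https://10.0.2.15:8443/healthz
	# 6578fb58a2c5 is the kube-apiserver container the log gatherer above inspected;
	# any ID from "docker ps -a" inside the guest would do.
	out/minikube-darwin-arm64 ssh -p stopped-upgrade-130000 -- docker logs --tail 400 6578fb58a2c5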

TestPause/serial/Start (9.93s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-694000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-694000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.855482125s)

-- stdout --
	* [pause-694000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19643
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-694000" primary control-plane node in "pause-694000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-694000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-694000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-694000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-694000 -n pause-694000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-694000 -n pause-694000: exit status 7 (69.602292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-694000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.93s)
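
This failure, and every GUEST_PROVISION failure that follows in this report, has the same proximate cause: nothing was listening on /var/run/socket_vmnet, so the qemu2 driver could not attach the VM to the socket_vmnet network. A quick sketch of how one might confirm that on the build host; the Homebrew service name is an assumption based on the upstream socket_vmnet install instructions, not something this log verifies:

	# Is the unix socket present, and is any daemon serving it?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# Assumption: socket_vmnet was installed via Homebrew and runs as a root
	# launchd service; restarting it should recreate the socket.
	sudo brew services restart socket_vmnet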

TestNoKubernetes/serial/StartWithK8s (9.88s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-993000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-993000 --driver=qemu2 : exit status 80 (9.816957s)

-- stdout --
	* [NoKubernetes-993000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19643
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-993000" primary control-plane node in "NoKubernetes-993000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-993000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-993000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-993000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-993000 -n NoKubernetes-993000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-993000 -n NoKubernetes-993000: exit status 7 (64.306042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-993000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.88s)

TestNoKubernetes/serial/StartWithStopK8s (5.27s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-993000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-993000 --no-kubernetes --driver=qemu2 : exit status 80 (5.232696209s)

-- stdout --
	* [NoKubernetes-993000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19643
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-993000
	* Restarting existing qemu2 VM for "NoKubernetes-993000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-993000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-993000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-993000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-993000 -n NoKubernetes-993000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-993000 -n NoKubernetes-993000: exit status 7 (32.835584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-993000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.27s)

TestNoKubernetes/serial/Start (5.29s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-993000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-993000 --no-kubernetes --driver=qemu2 : exit status 80 (5.232276708s)

-- stdout --
	* [NoKubernetes-993000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19643
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-993000
	* Restarting existing qemu2 VM for "NoKubernetes-993000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-993000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-993000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-993000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-993000 -n NoKubernetes-993000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-993000 -n NoKubernetes-993000: exit status 7 (57.261292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-993000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.29s)

TestNoKubernetes/serial/StartNoArgs (5.3s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-993000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-993000 --driver=qemu2 : exit status 80 (5.242939959s)

-- stdout --
	* [NoKubernetes-993000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19643
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-993000
	* Restarting existing qemu2 VM for "NoKubernetes-993000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-993000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-993000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-993000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-993000 -n NoKubernetes-993000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-993000 -n NoKubernetes-993000: exit status 7 (60.253042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-993000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.30s)
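
Note that the three preceding NoKubernetes failures report "driver start" rather than "creating host: create: creating": they are restart attempts against the stale NoKubernetes-993000 profile left behind by the failed first create. Following the hint minikube itself prints would at least return the suite to the fresh-create path; a sketch using the exact command from the log:

	# Remove the half-created profile before retrying (command taken verbatim
	# from the "may fix it" hint in the stderr above).
	out/minikube-darwin-arm64 delete -p NoKubernetes-993000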

TestNetworkPlugins/group/auto/Start (9.79s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-029000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-029000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.785697791s)

-- stdout --
	* [auto-029000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19643
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-029000" primary control-plane node in "auto-029000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-029000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0914 10:39:58.674559    5481 out.go:345] Setting OutFile to fd 1 ...
	I0914 10:39:58.674689    5481 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:39:58.674693    5481 out.go:358] Setting ErrFile to fd 2...
	I0914 10:39:58.674695    5481 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:39:58.674842    5481 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19643-1079/.minikube/bin
	I0914 10:39:58.675922    5481 out.go:352] Setting JSON to false
	I0914 10:39:58.692663    5481 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4161,"bootTime":1726331437,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0914 10:39:58.692734    5481 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0914 10:39:58.699280    5481 out.go:177] * [auto-029000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0914 10:39:58.706197    5481 out.go:177]   - MINIKUBE_LOCATION=19643
	I0914 10:39:58.706304    5481 notify.go:220] Checking for updates...
	I0914 10:39:58.713146    5481 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	I0914 10:39:58.716162    5481 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 10:39:58.719221    5481 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 10:39:58.720608    5481 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	I0914 10:39:58.724180    5481 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 10:39:58.727509    5481 config.go:182] Loaded profile config "multinode-699000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 10:39:58.727572    5481 config.go:182] Loaded profile config "stopped-upgrade-130000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0914 10:39:58.727614    5481 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 10:39:58.732001    5481 out.go:177] * Using the qemu2 driver based on user configuration
	I0914 10:39:58.739214    5481 start.go:297] selected driver: qemu2
	I0914 10:39:58.739220    5481 start.go:901] validating driver "qemu2" against <nil>
	I0914 10:39:58.739226    5481 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 10:39:58.741535    5481 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0914 10:39:58.744182    5481 out.go:177] * Automatically selected the socket_vmnet network
	I0914 10:39:58.747305    5481 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 10:39:58.747319    5481 cni.go:84] Creating CNI manager for ""
	I0914 10:39:58.747339    5481 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 10:39:58.747345    5481 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0914 10:39:58.747372    5481 start.go:340] cluster config:
	{Name:auto-029000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-029000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 10:39:58.750851    5481 iso.go:125] acquiring lock: {Name:mk9ce8cb266895455f3a8fa26b67755853c5c1f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 10:39:58.757187    5481 out.go:177] * Starting "auto-029000" primary control-plane node in "auto-029000" cluster
	I0914 10:39:58.761179    5481 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0914 10:39:58.761195    5481 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0914 10:39:58.761206    5481 cache.go:56] Caching tarball of preloaded images
	I0914 10:39:58.761274    5481 preload.go:172] Found /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 10:39:58.761278    5481 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0914 10:39:58.761333    5481 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/auto-029000/config.json ...
	I0914 10:39:58.761342    5481 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/auto-029000/config.json: {Name:mkb4eaba5c99a33f3e831bbcc4d2dbed66a7a21b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 10:39:58.761699    5481 start.go:360] acquireMachinesLock for auto-029000: {Name:mk3755b4300f547653a71a0686083053e04fcd63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 10:39:58.761729    5481 start.go:364] duration metric: took 24.75µs to acquireMachinesLock for "auto-029000"
	I0914 10:39:58.761740    5481 start.go:93] Provisioning new machine with config: &{Name:auto-029000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-029000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 10:39:58.761771    5481 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 10:39:58.770186    5481 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0914 10:39:58.785678    5481 start.go:159] libmachine.API.Create for "auto-029000" (driver="qemu2")
	I0914 10:39:58.785703    5481 client.go:168] LocalClient.Create starting
	I0914 10:39:58.785759    5481 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/ca.pem
	I0914 10:39:58.785793    5481 main.go:141] libmachine: Decoding PEM data...
	I0914 10:39:58.785802    5481 main.go:141] libmachine: Parsing certificate...
	I0914 10:39:58.785841    5481 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/cert.pem
	I0914 10:39:58.785866    5481 main.go:141] libmachine: Decoding PEM data...
	I0914 10:39:58.785879    5481 main.go:141] libmachine: Parsing certificate...
	I0914 10:39:58.786358    5481 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19643-1079/.minikube/cache/iso/arm64/minikube-v1.34.0-1726281733-19643-arm64.iso...
	I0914 10:39:58.943324    5481 main.go:141] libmachine: Creating SSH key...
	I0914 10:39:59.018327    5481 main.go:141] libmachine: Creating Disk image...
	I0914 10:39:59.018336    5481 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 10:39:59.018524    5481 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/auto-029000/disk.qcow2.raw /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/auto-029000/disk.qcow2
	I0914 10:39:59.028068    5481 main.go:141] libmachine: STDOUT: 
	I0914 10:39:59.028081    5481 main.go:141] libmachine: STDERR: 
	I0914 10:39:59.028145    5481 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/auto-029000/disk.qcow2 +20000M
	I0914 10:39:59.036005    5481 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 10:39:59.036021    5481 main.go:141] libmachine: STDERR: 
	I0914 10:39:59.036037    5481 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/auto-029000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/auto-029000/disk.qcow2
	I0914 10:39:59.036043    5481 main.go:141] libmachine: Starting QEMU VM...
	I0914 10:39:59.036057    5481 qemu.go:418] Using hvf for hardware acceleration
	I0914 10:39:59.036083    5481 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/auto-029000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/auto-029000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/auto-029000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:7f:ae:4b:04:a4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/auto-029000/disk.qcow2
	I0914 10:39:59.037740    5481 main.go:141] libmachine: STDOUT: 
	I0914 10:39:59.037753    5481 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 10:39:59.037774    5481 client.go:171] duration metric: took 252.075209ms to LocalClient.Create
	I0914 10:40:01.039799    5481 start.go:128] duration metric: took 2.278116833s to createHost
	I0914 10:40:01.039813    5481 start.go:83] releasing machines lock for "auto-029000", held for 2.278176s
	W0914 10:40:01.039830    5481 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 10:40:01.048390    5481 out.go:177] * Deleting "auto-029000" in qemu2 ...
	W0914 10:40:01.064988    5481 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 10:40:01.064998    5481 start.go:729] Will try again in 5 seconds ...
	I0914 10:40:06.067046    5481 start.go:360] acquireMachinesLock for auto-029000: {Name:mk3755b4300f547653a71a0686083053e04fcd63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 10:40:06.067612    5481 start.go:364] duration metric: took 392.875µs to acquireMachinesLock for "auto-029000"
	I0914 10:40:06.067687    5481 start.go:93] Provisioning new machine with config: &{Name:auto-029000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-029000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 10:40:06.067964    5481 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 10:40:06.077435    5481 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0914 10:40:06.126429    5481 start.go:159] libmachine.API.Create for "auto-029000" (driver="qemu2")
	I0914 10:40:06.126489    5481 client.go:168] LocalClient.Create starting
	I0914 10:40:06.126619    5481 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/ca.pem
	I0914 10:40:06.126686    5481 main.go:141] libmachine: Decoding PEM data...
	I0914 10:40:06.126712    5481 main.go:141] libmachine: Parsing certificate...
	I0914 10:40:06.126780    5481 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/cert.pem
	I0914 10:40:06.126830    5481 main.go:141] libmachine: Decoding PEM data...
	I0914 10:40:06.126840    5481 main.go:141] libmachine: Parsing certificate...
	I0914 10:40:06.127362    5481 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19643-1079/.minikube/cache/iso/arm64/minikube-v1.34.0-1726281733-19643-arm64.iso...
	I0914 10:40:06.296060    5481 main.go:141] libmachine: Creating SSH key...
	I0914 10:40:06.366416    5481 main.go:141] libmachine: Creating Disk image...
	I0914 10:40:06.366422    5481 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 10:40:06.366615    5481 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/auto-029000/disk.qcow2.raw /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/auto-029000/disk.qcow2
	I0914 10:40:06.376506    5481 main.go:141] libmachine: STDOUT: 
	I0914 10:40:06.376525    5481 main.go:141] libmachine: STDERR: 
	I0914 10:40:06.376585    5481 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/auto-029000/disk.qcow2 +20000M
	I0914 10:40:06.384860    5481 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 10:40:06.384876    5481 main.go:141] libmachine: STDERR: 
	I0914 10:40:06.384887    5481 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/auto-029000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/auto-029000/disk.qcow2
	I0914 10:40:06.384892    5481 main.go:141] libmachine: Starting QEMU VM...
	I0914 10:40:06.384902    5481 qemu.go:418] Using hvf for hardware acceleration
	I0914 10:40:06.384940    5481 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/auto-029000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/auto-029000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/auto-029000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:c0:6e:53:7b:7a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/auto-029000/disk.qcow2
	I0914 10:40:06.386688    5481 main.go:141] libmachine: STDOUT: 
	I0914 10:40:06.386705    5481 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 10:40:06.386717    5481 client.go:171] duration metric: took 260.232709ms to LocalClient.Create
	I0914 10:40:08.388771    5481 start.go:128] duration metric: took 2.320881917s to createHost
	I0914 10:40:08.388806    5481 start.go:83] releasing machines lock for "auto-029000", held for 2.321269417s
	W0914 10:40:08.389047    5481 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p auto-029000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-029000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 10:40:08.398607    5481 out.go:201] 
	W0914 10:40:08.405628    5481 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 10:40:08.405688    5481 out.go:270] * 
	* 
	W0914 10:40:08.407079    5481 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 10:40:08.417536    5481 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.79s)
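
The verbose trace above shows how the qemu2 driver launches the VM: qemu-system-aarch64 is wrapped in /opt/socket_vmnet/bin/socket_vmnet_client, which connects to /var/run/socket_vmnet and hands the connection to QEMU as fd 3 (-netdev socket,id=net0,fd=3). Because socket_vmnet_client must connect before it execs its command, the failure can likely be reproduced without starting a VM at all; a sketch, with /usr/bin/true as a hypothetical stand-in for the QEMU command line (exact error formatting may differ):

	# If the daemon is down, this should fail with the same 'Failed to connect
	# to "/var/run/socket_vmnet": Connection refused' seen throughout this report.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /usr/bin/true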

TestNetworkPlugins/group/calico/Start (9.95s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-029000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-029000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.949176542s)

-- stdout --
	* [calico-029000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19643
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-029000" primary control-plane node in "calico-029000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-029000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0914 10:40:10.612965    5593 out.go:345] Setting OutFile to fd 1 ...
	I0914 10:40:10.613115    5593 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:40:10.613118    5593 out.go:358] Setting ErrFile to fd 2...
	I0914 10:40:10.613120    5593 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:40:10.613268    5593 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19643-1079/.minikube/bin
	I0914 10:40:10.614452    5593 out.go:352] Setting JSON to false
	I0914 10:40:10.631347    5593 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4173,"bootTime":1726331437,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0914 10:40:10.631416    5593 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0914 10:40:10.638221    5593 out.go:177] * [calico-029000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0914 10:40:10.646912    5593 out.go:177]   - MINIKUBE_LOCATION=19643
	I0914 10:40:10.646961    5593 notify.go:220] Checking for updates...
	I0914 10:40:10.653067    5593 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	I0914 10:40:10.654553    5593 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 10:40:10.658109    5593 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 10:40:10.661123    5593 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	I0914 10:40:10.664118    5593 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 10:40:10.667415    5593 config.go:182] Loaded profile config "multinode-699000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 10:40:10.667483    5593 config.go:182] Loaded profile config "stopped-upgrade-130000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0914 10:40:10.667522    5593 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 10:40:10.672051    5593 out.go:177] * Using the qemu2 driver based on user configuration
	I0914 10:40:10.679065    5593 start.go:297] selected driver: qemu2
	I0914 10:40:10.679072    5593 start.go:901] validating driver "qemu2" against <nil>
	I0914 10:40:10.679083    5593 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 10:40:10.681329    5593 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0914 10:40:10.685027    5593 out.go:177] * Automatically selected the socket_vmnet network
	I0914 10:40:10.688251    5593 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 10:40:10.688282    5593 cni.go:84] Creating CNI manager for "calico"
	I0914 10:40:10.688292    5593 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0914 10:40:10.688325    5593 start.go:340] cluster config:
	{Name:calico-029000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:calico-029000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 10:40:10.692100    5593 iso.go:125] acquiring lock: {Name:mk9ce8cb266895455f3a8fa26b67755853c5c1f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 10:40:10.699070    5593 out.go:177] * Starting "calico-029000" primary control-plane node in "calico-029000" cluster
	I0914 10:40:10.702013    5593 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0914 10:40:10.702028    5593 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0914 10:40:10.702038    5593 cache.go:56] Caching tarball of preloaded images
	I0914 10:40:10.702102    5593 preload.go:172] Found /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 10:40:10.702108    5593 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0914 10:40:10.702162    5593 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/calico-029000/config.json ...
	I0914 10:40:10.702173    5593 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/calico-029000/config.json: {Name:mk9ef091be1f33aa10b3caa9d2beb24a994199c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 10:40:10.702388    5593 start.go:360] acquireMachinesLock for calico-029000: {Name:mk3755b4300f547653a71a0686083053e04fcd63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 10:40:10.702422    5593 start.go:364] duration metric: took 27.833µs to acquireMachinesLock for "calico-029000"
	I0914 10:40:10.702431    5593 start.go:93] Provisioning new machine with config: &{Name:calico-029000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:calico-029000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 10:40:10.702456    5593 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 10:40:10.710937    5593 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0914 10:40:10.726737    5593 start.go:159] libmachine.API.Create for "calico-029000" (driver="qemu2")
	I0914 10:40:10.726774    5593 client.go:168] LocalClient.Create starting
	I0914 10:40:10.726846    5593 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/ca.pem
	I0914 10:40:10.726882    5593 main.go:141] libmachine: Decoding PEM data...
	I0914 10:40:10.726891    5593 main.go:141] libmachine: Parsing certificate...
	I0914 10:40:10.726937    5593 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/cert.pem
	I0914 10:40:10.726960    5593 main.go:141] libmachine: Decoding PEM data...
	I0914 10:40:10.726969    5593 main.go:141] libmachine: Parsing certificate...
	I0914 10:40:10.727305    5593 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19643-1079/.minikube/cache/iso/arm64/minikube-v1.34.0-1726281733-19643-arm64.iso...
	I0914 10:40:10.887054    5593 main.go:141] libmachine: Creating SSH key...
	I0914 10:40:10.942304    5593 main.go:141] libmachine: Creating Disk image...
	I0914 10:40:10.942313    5593 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 10:40:10.942560    5593 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/calico-029000/disk.qcow2.raw /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/calico-029000/disk.qcow2
	I0914 10:40:10.951530    5593 main.go:141] libmachine: STDOUT: 
	I0914 10:40:10.951546    5593 main.go:141] libmachine: STDERR: 
	I0914 10:40:10.951603    5593 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/calico-029000/disk.qcow2 +20000M
	I0914 10:40:10.959394    5593 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 10:40:10.959410    5593 main.go:141] libmachine: STDERR: 
	I0914 10:40:10.959432    5593 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/calico-029000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/calico-029000/disk.qcow2
	I0914 10:40:10.959436    5593 main.go:141] libmachine: Starting QEMU VM...
	I0914 10:40:10.959451    5593 qemu.go:418] Using hvf for hardware acceleration
	I0914 10:40:10.959476    5593 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/calico-029000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/calico-029000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/calico-029000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:41:e7:e8:4e:ad -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/calico-029000/disk.qcow2
	I0914 10:40:10.961057    5593 main.go:141] libmachine: STDOUT: 
	I0914 10:40:10.961070    5593 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 10:40:10.961092    5593 client.go:171] duration metric: took 234.320625ms to LocalClient.Create
	I0914 10:40:12.963110    5593 start.go:128] duration metric: took 2.26073525s to createHost
	I0914 10:40:12.963140    5593 start.go:83] releasing machines lock for "calico-029000", held for 2.260807625s
	W0914 10:40:12.963189    5593 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 10:40:12.967891    5593 out.go:177] * Deleting "calico-029000" in qemu2 ...
	W0914 10:40:13.002068    5593 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 10:40:13.002085    5593 start.go:729] Will try again in 5 seconds ...
	I0914 10:40:18.004086    5593 start.go:360] acquireMachinesLock for calico-029000: {Name:mk3755b4300f547653a71a0686083053e04fcd63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 10:40:18.004653    5593 start.go:364] duration metric: took 427.458µs to acquireMachinesLock for "calico-029000"
	I0914 10:40:18.004770    5593 start.go:93] Provisioning new machine with config: &{Name:calico-029000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:calico-029000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 10:40:18.005045    5593 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 10:40:18.013559    5593 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0914 10:40:18.059859    5593 start.go:159] libmachine.API.Create for "calico-029000" (driver="qemu2")
	I0914 10:40:18.059925    5593 client.go:168] LocalClient.Create starting
	I0914 10:40:18.060033    5593 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/ca.pem
	I0914 10:40:18.060103    5593 main.go:141] libmachine: Decoding PEM data...
	I0914 10:40:18.060120    5593 main.go:141] libmachine: Parsing certificate...
	I0914 10:40:18.060192    5593 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/cert.pem
	I0914 10:40:18.060237    5593 main.go:141] libmachine: Decoding PEM data...
	I0914 10:40:18.060256    5593 main.go:141] libmachine: Parsing certificate...
	I0914 10:40:18.060825    5593 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19643-1079/.minikube/cache/iso/arm64/minikube-v1.34.0-1726281733-19643-arm64.iso...
	I0914 10:40:18.229848    5593 main.go:141] libmachine: Creating SSH key...
	I0914 10:40:18.454755    5593 main.go:141] libmachine: Creating Disk image...
	I0914 10:40:18.454772    5593 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 10:40:18.455019    5593 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/calico-029000/disk.qcow2.raw /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/calico-029000/disk.qcow2
	I0914 10:40:18.464868    5593 main.go:141] libmachine: STDOUT: 
	I0914 10:40:18.464891    5593 main.go:141] libmachine: STDERR: 
	I0914 10:40:18.464956    5593 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/calico-029000/disk.qcow2 +20000M
	I0914 10:40:18.473209    5593 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 10:40:18.473234    5593 main.go:141] libmachine: STDERR: 
	I0914 10:40:18.473248    5593 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/calico-029000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/calico-029000/disk.qcow2
	I0914 10:40:18.473254    5593 main.go:141] libmachine: Starting QEMU VM...
	I0914 10:40:18.473263    5593 qemu.go:418] Using hvf for hardware acceleration
	I0914 10:40:18.473290    5593 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/calico-029000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/calico-029000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/calico-029000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:d4:db:60:34:82 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/calico-029000/disk.qcow2
	I0914 10:40:18.475043    5593 main.go:141] libmachine: STDOUT: 
	I0914 10:40:18.475056    5593 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 10:40:18.475068    5593 client.go:171] duration metric: took 415.155542ms to LocalClient.Create
	I0914 10:40:20.477055    5593 start.go:128] duration metric: took 2.472083958s to createHost
	I0914 10:40:20.477097    5593 start.go:83] releasing machines lock for "calico-029000", held for 2.47249275s
	W0914 10:40:20.477289    5593 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p calico-029000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-029000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 10:40:20.497803    5593 out.go:201] 
	W0914 10:40:20.500743    5593 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 10:40:20.500765    5593 out.go:270] * 
	* 
	W0914 10:40:20.502078    5593 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 10:40:20.519743    5593 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.95s)
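Note: every failure in this group reduces to the same root cause: /opt/socket_vmnet/bin/socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so QEMU never receives its network file descriptor (the "-netdev socket,id=net0,fd=3" in the command lines above) and provisioning aborts. The following is a minimal Go sketch for triaging this on the host; it is a diagnostic aid written for this report, not part of minikube or socket_vmnet, and the socket path is simply the SocketVMnetPath value from the config dumps above.

    // probe_socket_vmnet.go - standalone diagnostic sketch (not minikube code).
    // Distinguishes a missing socket file from a present-but-unserved socket,
    // the "Connection refused" case captured in the STDERR above.
    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    func main() {
        const sock = "/var/run/socket_vmnet" // SocketVMnetPath from the cluster config
        if _, err := os.Stat(sock); err != nil {
            // The daemon never created the socket file at all.
            fmt.Fprintf(os.Stderr, "socket file missing: %v\n", err)
            os.Exit(1)
        }
        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            // The file exists but nothing is accepting connections on it:
            // this is the state these test runs observed.
            fmt.Fprintf(os.Stderr, "dial failed: %v\n", err)
            os.Exit(1)
        }
        conn.Close()
        fmt.Println("socket_vmnet daemon is accepting connections")
    }

If the dial fails while the socket file exists, the usual fix is restarting the socket_vmnet daemon (it typically runs as root, via launchd or manually); socket permissions are the other common culprit.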

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (9.82s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-029000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-029000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.821811959s)

                                                
                                                
-- stdout --
	* [custom-flannel-029000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19643
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-029000" primary control-plane node in "custom-flannel-029000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-029000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 10:40:22.898427    5710 out.go:345] Setting OutFile to fd 1 ...
	I0914 10:40:22.898547    5710 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:40:22.898550    5710 out.go:358] Setting ErrFile to fd 2...
	I0914 10:40:22.898552    5710 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:40:22.898714    5710 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19643-1079/.minikube/bin
	I0914 10:40:22.899787    5710 out.go:352] Setting JSON to false
	I0914 10:40:22.916838    5710 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4185,"bootTime":1726331437,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0914 10:40:22.916914    5710 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0914 10:40:22.922707    5710 out.go:177] * [custom-flannel-029000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0914 10:40:22.930601    5710 out.go:177]   - MINIKUBE_LOCATION=19643
	I0914 10:40:22.930639    5710 notify.go:220] Checking for updates...
	I0914 10:40:22.938491    5710 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	I0914 10:40:22.941494    5710 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 10:40:22.944530    5710 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 10:40:22.946065    5710 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	I0914 10:40:22.949510    5710 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 10:40:22.952814    5710 config.go:182] Loaded profile config "multinode-699000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 10:40:22.952877    5710 config.go:182] Loaded profile config "stopped-upgrade-130000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0914 10:40:22.952915    5710 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 10:40:22.957312    5710 out.go:177] * Using the qemu2 driver based on user configuration
	I0914 10:40:22.964505    5710 start.go:297] selected driver: qemu2
	I0914 10:40:22.964511    5710 start.go:901] validating driver "qemu2" against <nil>
	I0914 10:40:22.964518    5710 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 10:40:22.966761    5710 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0914 10:40:22.969595    5710 out.go:177] * Automatically selected the socket_vmnet network
	I0914 10:40:22.972575    5710 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 10:40:22.972590    5710 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0914 10:40:22.972600    5710 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0914 10:40:22.972627    5710 start.go:340] cluster config:
	{Name:custom-flannel-029000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-029000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 10:40:22.976178    5710 iso.go:125] acquiring lock: {Name:mk9ce8cb266895455f3a8fa26b67755853c5c1f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 10:40:22.982484    5710 out.go:177] * Starting "custom-flannel-029000" primary control-plane node in "custom-flannel-029000" cluster
	I0914 10:40:22.986464    5710 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0914 10:40:22.986477    5710 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0914 10:40:22.986484    5710 cache.go:56] Caching tarball of preloaded images
	I0914 10:40:22.986541    5710 preload.go:172] Found /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 10:40:22.986546    5710 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0914 10:40:22.986594    5710 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/custom-flannel-029000/config.json ...
	I0914 10:40:22.986604    5710 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/custom-flannel-029000/config.json: {Name:mkcd772f3738d828ed60d0a6b618b6fdb88332ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 10:40:22.986804    5710 start.go:360] acquireMachinesLock for custom-flannel-029000: {Name:mk3755b4300f547653a71a0686083053e04fcd63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 10:40:22.986833    5710 start.go:364] duration metric: took 23.834µs to acquireMachinesLock for "custom-flannel-029000"
	I0914 10:40:22.986844    5710 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-029000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-029000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 10:40:22.986866    5710 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 10:40:22.994523    5710 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0914 10:40:23.010293    5710 start.go:159] libmachine.API.Create for "custom-flannel-029000" (driver="qemu2")
	I0914 10:40:23.010320    5710 client.go:168] LocalClient.Create starting
	I0914 10:40:23.010388    5710 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/ca.pem
	I0914 10:40:23.010423    5710 main.go:141] libmachine: Decoding PEM data...
	I0914 10:40:23.010433    5710 main.go:141] libmachine: Parsing certificate...
	I0914 10:40:23.010470    5710 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/cert.pem
	I0914 10:40:23.010494    5710 main.go:141] libmachine: Decoding PEM data...
	I0914 10:40:23.010501    5710 main.go:141] libmachine: Parsing certificate...
	I0914 10:40:23.010852    5710 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19643-1079/.minikube/cache/iso/arm64/minikube-v1.34.0-1726281733-19643-arm64.iso...
	I0914 10:40:23.173074    5710 main.go:141] libmachine: Creating SSH key...
	I0914 10:40:23.290445    5710 main.go:141] libmachine: Creating Disk image...
	I0914 10:40:23.290455    5710 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 10:40:23.290634    5710 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/custom-flannel-029000/disk.qcow2.raw /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/custom-flannel-029000/disk.qcow2
	I0914 10:40:23.300009    5710 main.go:141] libmachine: STDOUT: 
	I0914 10:40:23.300031    5710 main.go:141] libmachine: STDERR: 
	I0914 10:40:23.300089    5710 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/custom-flannel-029000/disk.qcow2 +20000M
	I0914 10:40:23.307963    5710 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 10:40:23.307979    5710 main.go:141] libmachine: STDERR: 
	I0914 10:40:23.307998    5710 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/custom-flannel-029000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/custom-flannel-029000/disk.qcow2
	I0914 10:40:23.308003    5710 main.go:141] libmachine: Starting QEMU VM...
	I0914 10:40:23.308016    5710 qemu.go:418] Using hvf for hardware acceleration
	I0914 10:40:23.308044    5710 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/custom-flannel-029000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/custom-flannel-029000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/custom-flannel-029000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:ab:b8:54:cf:9a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/custom-flannel-029000/disk.qcow2
	I0914 10:40:23.309609    5710 main.go:141] libmachine: STDOUT: 
	I0914 10:40:23.309626    5710 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 10:40:23.309648    5710 client.go:171] duration metric: took 299.332125ms to LocalClient.Create
	I0914 10:40:25.311676    5710 start.go:128] duration metric: took 2.324892334s to createHost
	I0914 10:40:25.311724    5710 start.go:83] releasing machines lock for "custom-flannel-029000", held for 2.324981958s
	W0914 10:40:25.311755    5710 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 10:40:25.331742    5710 out.go:177] * Deleting "custom-flannel-029000" in qemu2 ...
	W0914 10:40:25.355654    5710 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 10:40:25.355670    5710 start.go:729] Will try again in 5 seconds ...
	I0914 10:40:30.356105    5710 start.go:360] acquireMachinesLock for custom-flannel-029000: {Name:mk3755b4300f547653a71a0686083053e04fcd63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 10:40:30.356727    5710 start.go:364] duration metric: took 531.084µs to acquireMachinesLock for "custom-flannel-029000"
	I0914 10:40:30.356882    5710 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-029000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-029000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 10:40:30.357235    5710 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 10:40:30.362165    5710 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0914 10:40:30.414021    5710 start.go:159] libmachine.API.Create for "custom-flannel-029000" (driver="qemu2")
	I0914 10:40:30.414093    5710 client.go:168] LocalClient.Create starting
	I0914 10:40:30.414222    5710 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/ca.pem
	I0914 10:40:30.414287    5710 main.go:141] libmachine: Decoding PEM data...
	I0914 10:40:30.414303    5710 main.go:141] libmachine: Parsing certificate...
	I0914 10:40:30.414386    5710 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/cert.pem
	I0914 10:40:30.414431    5710 main.go:141] libmachine: Decoding PEM data...
	I0914 10:40:30.414447    5710 main.go:141] libmachine: Parsing certificate...
	I0914 10:40:30.415077    5710 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19643-1079/.minikube/cache/iso/arm64/minikube-v1.34.0-1726281733-19643-arm64.iso...
	I0914 10:40:30.589740    5710 main.go:141] libmachine: Creating SSH key...
	I0914 10:40:30.637565    5710 main.go:141] libmachine: Creating Disk image...
	I0914 10:40:30.637572    5710 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 10:40:30.637768    5710 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/custom-flannel-029000/disk.qcow2.raw /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/custom-flannel-029000/disk.qcow2
	I0914 10:40:30.647336    5710 main.go:141] libmachine: STDOUT: 
	I0914 10:40:30.647357    5710 main.go:141] libmachine: STDERR: 
	I0914 10:40:30.647427    5710 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/custom-flannel-029000/disk.qcow2 +20000M
	I0914 10:40:30.655823    5710 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 10:40:30.655838    5710 main.go:141] libmachine: STDERR: 
	I0914 10:40:30.655859    5710 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/custom-flannel-029000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/custom-flannel-029000/disk.qcow2
	I0914 10:40:30.655866    5710 main.go:141] libmachine: Starting QEMU VM...
	I0914 10:40:30.655878    5710 qemu.go:418] Using hvf for hardware acceleration
	I0914 10:40:30.655913    5710 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/custom-flannel-029000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/custom-flannel-029000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/custom-flannel-029000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:43:7b:36:de:10 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/custom-flannel-029000/disk.qcow2
	I0914 10:40:30.657631    5710 main.go:141] libmachine: STDOUT: 
	I0914 10:40:30.657645    5710 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 10:40:30.657657    5710 client.go:171] duration metric: took 243.568459ms to LocalClient.Create
	I0914 10:40:32.658500    5710 start.go:128] duration metric: took 2.301347166s to createHost
	I0914 10:40:32.658525    5710 start.go:83] releasing machines lock for "custom-flannel-029000", held for 2.301873375s
	W0914 10:40:32.658612    5710 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-029000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-029000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 10:40:32.665812    5710 out.go:201] 
	W0914 10:40:32.672927    5710 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 10:40:32.672938    5710 out.go:270] * 
	* 
	W0914 10:40:32.673398    5710 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 10:40:32.680796    5710 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.82s)
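The control flow is identical across plugins: host creation fails, minikube deletes the half-created VM, waits a fixed 5 seconds ("Will try again in 5 seconds ..."), retries once, and then exits with status 80 (GUEST_PROVISION), which net_test.go:114 reports as a failed start. Below is a simplified Go reconstruction of that flow as observed in these runs; the function names are illustrative stand-ins, not minikube's actual internals (the real logic lives in the start.go referenced by the log prefixes).

    // retry_sketch.go - illustrative reconstruction of the retry behavior
    // visible in the logs: one fixed-delay re-attempt, then the error is
    // surfaced to the caller.
    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    func createHost() error {
        // In these runs this step always fails before QEMU comes up:
        return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
    }

    func main() {
        if err := createHost(); err != nil {
            fmt.Println("! StartHost failed, but will try again:", err)
            time.Sleep(5 * time.Second) // fixed backoff seen in the log
            if err := createHost(); err != nil {
                // minikube exits 80 at this point; the test asserts on that status.
                fmt.Println("X Exiting due to GUEST_PROVISION:", err)
                return
            }
        }
        fmt.Println("host created")
    }

Because the retry hits the same unreachable socket, every network-plugin test in this group fails in just under 10 seconds with the same exit status.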

                                                
                                    
TestNetworkPlugins/group/false/Start (9.85s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-029000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-029000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.844626834s)

                                                
                                                
-- stdout --
	* [false-029000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19643
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-029000" primary control-plane node in "false-029000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-029000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 10:40:35.079800    5831 out.go:345] Setting OutFile to fd 1 ...
	I0914 10:40:35.079946    5831 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:40:35.079950    5831 out.go:358] Setting ErrFile to fd 2...
	I0914 10:40:35.079953    5831 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:40:35.080078    5831 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19643-1079/.minikube/bin
	I0914 10:40:35.081197    5831 out.go:352] Setting JSON to false
	I0914 10:40:35.097740    5831 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4198,"bootTime":1726331437,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0914 10:40:35.097809    5831 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0914 10:40:35.104572    5831 out.go:177] * [false-029000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0914 10:40:35.112394    5831 out.go:177]   - MINIKUBE_LOCATION=19643
	I0914 10:40:35.112419    5831 notify.go:220] Checking for updates...
	I0914 10:40:35.116309    5831 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	I0914 10:40:35.119410    5831 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 10:40:35.122354    5831 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 10:40:35.129334    5831 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	I0914 10:40:35.132362    5831 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 10:40:35.135851    5831 config.go:182] Loaded profile config "multinode-699000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 10:40:35.135912    5831 config.go:182] Loaded profile config "stopped-upgrade-130000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0914 10:40:35.135956    5831 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 10:40:35.140339    5831 out.go:177] * Using the qemu2 driver based on user configuration
	I0914 10:40:35.147381    5831 start.go:297] selected driver: qemu2
	I0914 10:40:35.147390    5831 start.go:901] validating driver "qemu2" against <nil>
	I0914 10:40:35.147397    5831 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 10:40:35.149663    5831 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0914 10:40:35.153371    5831 out.go:177] * Automatically selected the socket_vmnet network
	I0914 10:40:35.156482    5831 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 10:40:35.156508    5831 cni.go:84] Creating CNI manager for "false"
	I0914 10:40:35.156537    5831 start.go:340] cluster config:
	{Name:false-029000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:false-029000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 10:40:35.160204    5831 iso.go:125] acquiring lock: {Name:mk9ce8cb266895455f3a8fa26b67755853c5c1f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 10:40:35.169300    5831 out.go:177] * Starting "false-029000" primary control-plane node in "false-029000" cluster
	I0914 10:40:35.173303    5831 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0914 10:40:35.173315    5831 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0914 10:40:35.173322    5831 cache.go:56] Caching tarball of preloaded images
	I0914 10:40:35.173373    5831 preload.go:172] Found /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 10:40:35.173377    5831 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0914 10:40:35.173423    5831 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/false-029000/config.json ...
	I0914 10:40:35.173434    5831 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/false-029000/config.json: {Name:mk125cd2ea87d0bfebab3fe0d23c2742440edb00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 10:40:35.173675    5831 start.go:360] acquireMachinesLock for false-029000: {Name:mk3755b4300f547653a71a0686083053e04fcd63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 10:40:35.173705    5831 start.go:364] duration metric: took 24.875µs to acquireMachinesLock for "false-029000"
	I0914 10:40:35.173715    5831 start.go:93] Provisioning new machine with config: &{Name:false-029000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:false-029000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 10:40:35.173747    5831 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 10:40:35.175414    5831 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0914 10:40:35.190662    5831 start.go:159] libmachine.API.Create for "false-029000" (driver="qemu2")
	I0914 10:40:35.190687    5831 client.go:168] LocalClient.Create starting
	I0914 10:40:35.190748    5831 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/ca.pem
	I0914 10:40:35.190778    5831 main.go:141] libmachine: Decoding PEM data...
	I0914 10:40:35.190787    5831 main.go:141] libmachine: Parsing certificate...
	I0914 10:40:35.190828    5831 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/cert.pem
	I0914 10:40:35.190851    5831 main.go:141] libmachine: Decoding PEM data...
	I0914 10:40:35.190861    5831 main.go:141] libmachine: Parsing certificate...
	I0914 10:40:35.191196    5831 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19643-1079/.minikube/cache/iso/arm64/minikube-v1.34.0-1726281733-19643-arm64.iso...
	I0914 10:40:35.363506    5831 main.go:141] libmachine: Creating SSH key...
	I0914 10:40:35.395235    5831 main.go:141] libmachine: Creating Disk image...
	I0914 10:40:35.395241    5831 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 10:40:35.395419    5831 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/false-029000/disk.qcow2.raw /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/false-029000/disk.qcow2
	I0914 10:40:35.404641    5831 main.go:141] libmachine: STDOUT: 
	I0914 10:40:35.404658    5831 main.go:141] libmachine: STDERR: 
	I0914 10:40:35.404715    5831 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/false-029000/disk.qcow2 +20000M
	I0914 10:40:35.412562    5831 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 10:40:35.412579    5831 main.go:141] libmachine: STDERR: 
	I0914 10:40:35.412597    5831 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/false-029000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/false-029000/disk.qcow2
	I0914 10:40:35.412603    5831 main.go:141] libmachine: Starting QEMU VM...
	I0914 10:40:35.412614    5831 qemu.go:418] Using hvf for hardware acceleration
	I0914 10:40:35.412644    5831 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/false-029000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/false-029000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/false-029000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:30:6a:fc:9d:29 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/false-029000/disk.qcow2
	I0914 10:40:35.414319    5831 main.go:141] libmachine: STDOUT: 
	I0914 10:40:35.414333    5831 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 10:40:35.414357    5831 client.go:171] duration metric: took 223.671125ms to LocalClient.Create
	I0914 10:40:37.416502    5831 start.go:128] duration metric: took 2.242812041s to createHost
	I0914 10:40:37.416578    5831 start.go:83] releasing machines lock for "false-029000", held for 2.242956625s
	W0914 10:40:37.416633    5831 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 10:40:37.427946    5831 out.go:177] * Deleting "false-029000" in qemu2 ...
	W0914 10:40:37.466233    5831 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 10:40:37.466261    5831 start.go:729] Will try again in 5 seconds ...
	I0914 10:40:42.468375    5831 start.go:360] acquireMachinesLock for false-029000: {Name:mk3755b4300f547653a71a0686083053e04fcd63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 10:40:42.469000    5831 start.go:364] duration metric: took 475.125µs to acquireMachinesLock for "false-029000"
	I0914 10:40:42.469176    5831 start.go:93] Provisioning new machine with config: &{Name:false-029000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:false-029000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 10:40:42.469468    5831 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 10:40:42.480094    5831 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0914 10:40:42.518950    5831 start.go:159] libmachine.API.Create for "false-029000" (driver="qemu2")
	I0914 10:40:42.519006    5831 client.go:168] LocalClient.Create starting
	I0914 10:40:42.519101    5831 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/ca.pem
	I0914 10:40:42.519164    5831 main.go:141] libmachine: Decoding PEM data...
	I0914 10:40:42.519180    5831 main.go:141] libmachine: Parsing certificate...
	I0914 10:40:42.519233    5831 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/cert.pem
	I0914 10:40:42.519272    5831 main.go:141] libmachine: Decoding PEM data...
	I0914 10:40:42.519283    5831 main.go:141] libmachine: Parsing certificate...
	I0914 10:40:42.519726    5831 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19643-1079/.minikube/cache/iso/arm64/minikube-v1.34.0-1726281733-19643-arm64.iso...
	I0914 10:40:42.687593    5831 main.go:141] libmachine: Creating SSH key...
	I0914 10:40:42.833891    5831 main.go:141] libmachine: Creating Disk image...
	I0914 10:40:42.833901    5831 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 10:40:42.834095    5831 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/false-029000/disk.qcow2.raw /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/false-029000/disk.qcow2
	I0914 10:40:42.849430    5831 main.go:141] libmachine: STDOUT: 
	I0914 10:40:42.849451    5831 main.go:141] libmachine: STDERR: 
	I0914 10:40:42.849507    5831 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/false-029000/disk.qcow2 +20000M
	I0914 10:40:42.857693    5831 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 10:40:42.857709    5831 main.go:141] libmachine: STDERR: 
	I0914 10:40:42.857722    5831 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/false-029000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/false-029000/disk.qcow2
	I0914 10:40:42.857727    5831 main.go:141] libmachine: Starting QEMU VM...
	I0914 10:40:42.857737    5831 qemu.go:418] Using hvf for hardware acceleration
	I0914 10:40:42.857759    5831 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/false-029000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/false-029000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/false-029000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:e2:0e:6c:62:73 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/false-029000/disk.qcow2
	I0914 10:40:42.859419    5831 main.go:141] libmachine: STDOUT: 
	I0914 10:40:42.859434    5831 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 10:40:42.859447    5831 client.go:171] duration metric: took 340.4495ms to LocalClient.Create
	I0914 10:40:44.861552    5831 start.go:128] duration metric: took 2.392145583s to createHost
	I0914 10:40:44.861622    5831 start.go:83] releasing machines lock for "false-029000", held for 2.392673625s
	W0914 10:40:44.861937    5831 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p false-029000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-029000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 10:40:44.869558    5831 out.go:201] 
	W0914 10:40:44.871010    5831 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 10:40:44.871025    5831 out.go:270] * 
	* 
	W0914 10:40:44.872521    5831 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 10:40:44.883510    5831 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.85s)
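
Every failure in this group has the same root cause: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so QEMU is never handed a network file descriptor and VM creation aborts. A minimal standalone probe reproduces the refused connection outside the test suite (a sketch only; the socket path is taken from the command lines logged above, and this program is not part of minikube):

	package main

	import (
		"log"
		"net"
	)

	func main() {
		// Dial the same unix socket that socket_vmnet_client forwards QEMU
		// traffic through. If the socket_vmnet daemon is not running, or the
		// socket file is stale, this fails with the "Connection refused"
		// reported throughout this group.
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			log.Fatalf("socket_vmnet unreachable: %v", err)
		}
		defer conn.Close()
		log.Println("socket_vmnet is accepting connections")
	}

If this probe fails on the CI host, every qemu2-driver test that selects the socket_vmnet network will fail at VM creation in the same way, regardless of which CNI is under test.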

TestNetworkPlugins/group/kindnet/Start (9.81s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-029000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-029000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.808623708s)

-- stdout --
	* [kindnet-029000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19643
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-029000" primary control-plane node in "kindnet-029000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-029000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0914 10:40:47.097703    5945 out.go:345] Setting OutFile to fd 1 ...
	I0914 10:40:47.097835    5945 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:40:47.097838    5945 out.go:358] Setting ErrFile to fd 2...
	I0914 10:40:47.097841    5945 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:40:47.097968    5945 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19643-1079/.minikube/bin
	I0914 10:40:47.099042    5945 out.go:352] Setting JSON to false
	I0914 10:40:47.115832    5945 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4210,"bootTime":1726331437,"procs":481,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0914 10:40:47.115901    5945 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0914 10:40:47.122246    5945 out.go:177] * [kindnet-029000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0914 10:40:47.132020    5945 out.go:177]   - MINIKUBE_LOCATION=19643
	I0914 10:40:47.132058    5945 notify.go:220] Checking for updates...
	I0914 10:40:47.139994    5945 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	I0914 10:40:47.143065    5945 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 10:40:47.146033    5945 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 10:40:47.149064    5945 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	I0914 10:40:47.152007    5945 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 10:40:47.155251    5945 config.go:182] Loaded profile config "multinode-699000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 10:40:47.155319    5945 config.go:182] Loaded profile config "stopped-upgrade-130000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0914 10:40:47.155367    5945 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 10:40:47.160048    5945 out.go:177] * Using the qemu2 driver based on user configuration
	I0914 10:40:47.165963    5945 start.go:297] selected driver: qemu2
	I0914 10:40:47.165968    5945 start.go:901] validating driver "qemu2" against <nil>
	I0914 10:40:47.165973    5945 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 10:40:47.168218    5945 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0914 10:40:47.179575    5945 out.go:177] * Automatically selected the socket_vmnet network
	I0914 10:40:47.183067    5945 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 10:40:47.183081    5945 cni.go:84] Creating CNI manager for "kindnet"
	I0914 10:40:47.183084    5945 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0914 10:40:47.183110    5945 start.go:340] cluster config:
	{Name:kindnet-029000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-029000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 10:40:47.186591    5945 iso.go:125] acquiring lock: {Name:mk9ce8cb266895455f3a8fa26b67755853c5c1f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 10:40:47.193975    5945 out.go:177] * Starting "kindnet-029000" primary control-plane node in "kindnet-029000" cluster
	I0914 10:40:47.198033    5945 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0914 10:40:47.198050    5945 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0914 10:40:47.198062    5945 cache.go:56] Caching tarball of preloaded images
	I0914 10:40:47.198124    5945 preload.go:172] Found /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 10:40:47.198131    5945 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0914 10:40:47.198213    5945 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/kindnet-029000/config.json ...
	I0914 10:40:47.198224    5945 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/kindnet-029000/config.json: {Name:mk075758cbd80787836b5e8e819798284d3b023e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 10:40:47.198594    5945 start.go:360] acquireMachinesLock for kindnet-029000: {Name:mk3755b4300f547653a71a0686083053e04fcd63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 10:40:47.198623    5945 start.go:364] duration metric: took 24.333µs to acquireMachinesLock for "kindnet-029000"
	I0914 10:40:47.198632    5945 start.go:93] Provisioning new machine with config: &{Name:kindnet-029000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-029000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 10:40:47.198659    5945 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 10:40:47.207037    5945 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0914 10:40:47.222185    5945 start.go:159] libmachine.API.Create for "kindnet-029000" (driver="qemu2")
	I0914 10:40:47.222211    5945 client.go:168] LocalClient.Create starting
	I0914 10:40:47.222271    5945 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/ca.pem
	I0914 10:40:47.222302    5945 main.go:141] libmachine: Decoding PEM data...
	I0914 10:40:47.222312    5945 main.go:141] libmachine: Parsing certificate...
	I0914 10:40:47.222349    5945 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/cert.pem
	I0914 10:40:47.222379    5945 main.go:141] libmachine: Decoding PEM data...
	I0914 10:40:47.222392    5945 main.go:141] libmachine: Parsing certificate...
	I0914 10:40:47.222727    5945 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19643-1079/.minikube/cache/iso/arm64/minikube-v1.34.0-1726281733-19643-arm64.iso...
	I0914 10:40:47.385756    5945 main.go:141] libmachine: Creating SSH key...
	I0914 10:40:47.460401    5945 main.go:141] libmachine: Creating Disk image...
	I0914 10:40:47.460408    5945 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 10:40:47.460595    5945 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/kindnet-029000/disk.qcow2.raw /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/kindnet-029000/disk.qcow2
	I0914 10:40:47.470008    5945 main.go:141] libmachine: STDOUT: 
	I0914 10:40:47.470026    5945 main.go:141] libmachine: STDERR: 
	I0914 10:40:47.470086    5945 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/kindnet-029000/disk.qcow2 +20000M
	I0914 10:40:47.478017    5945 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 10:40:47.478037    5945 main.go:141] libmachine: STDERR: 
	I0914 10:40:47.478055    5945 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/kindnet-029000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/kindnet-029000/disk.qcow2
	I0914 10:40:47.478062    5945 main.go:141] libmachine: Starting QEMU VM...
	I0914 10:40:47.478074    5945 qemu.go:418] Using hvf for hardware acceleration
	I0914 10:40:47.478100    5945 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/kindnet-029000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/kindnet-029000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/kindnet-029000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:80:f8:25:97:e5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/kindnet-029000/disk.qcow2
	I0914 10:40:47.479833    5945 main.go:141] libmachine: STDOUT: 
	I0914 10:40:47.479848    5945 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 10:40:47.479870    5945 client.go:171] duration metric: took 257.663375ms to LocalClient.Create
	I0914 10:40:49.481998    5945 start.go:128] duration metric: took 2.2834025s to createHost
	I0914 10:40:49.482085    5945 start.go:83] releasing machines lock for "kindnet-029000", held for 2.283548958s
	W0914 10:40:49.482144    5945 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 10:40:49.496643    5945 out.go:177] * Deleting "kindnet-029000" in qemu2 ...
	W0914 10:40:49.536351    5945 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 10:40:49.536377    5945 start.go:729] Will try again in 5 seconds ...
	I0914 10:40:54.538491    5945 start.go:360] acquireMachinesLock for kindnet-029000: {Name:mk3755b4300f547653a71a0686083053e04fcd63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 10:40:54.539217    5945 start.go:364] duration metric: took 594µs to acquireMachinesLock for "kindnet-029000"
	I0914 10:40:54.539372    5945 start.go:93] Provisioning new machine with config: &{Name:kindnet-029000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-029000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 10:40:54.539753    5945 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 10:40:54.544383    5945 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0914 10:40:54.596842    5945 start.go:159] libmachine.API.Create for "kindnet-029000" (driver="qemu2")
	I0914 10:40:54.596907    5945 client.go:168] LocalClient.Create starting
	I0914 10:40:54.597037    5945 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/ca.pem
	I0914 10:40:54.597108    5945 main.go:141] libmachine: Decoding PEM data...
	I0914 10:40:54.597130    5945 main.go:141] libmachine: Parsing certificate...
	I0914 10:40:54.597192    5945 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/cert.pem
	I0914 10:40:54.597244    5945 main.go:141] libmachine: Decoding PEM data...
	I0914 10:40:54.597255    5945 main.go:141] libmachine: Parsing certificate...
	I0914 10:40:54.597806    5945 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19643-1079/.minikube/cache/iso/arm64/minikube-v1.34.0-1726281733-19643-arm64.iso...
	I0914 10:40:54.770824    5945 main.go:141] libmachine: Creating SSH key...
	I0914 10:40:54.815022    5945 main.go:141] libmachine: Creating Disk image...
	I0914 10:40:54.815032    5945 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 10:40:54.815213    5945 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/kindnet-029000/disk.qcow2.raw /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/kindnet-029000/disk.qcow2
	I0914 10:40:54.824525    5945 main.go:141] libmachine: STDOUT: 
	I0914 10:40:54.824542    5945 main.go:141] libmachine: STDERR: 
	I0914 10:40:54.824622    5945 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/kindnet-029000/disk.qcow2 +20000M
	I0914 10:40:54.832559    5945 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 10:40:54.832575    5945 main.go:141] libmachine: STDERR: 
	I0914 10:40:54.832586    5945 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/kindnet-029000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/kindnet-029000/disk.qcow2
	I0914 10:40:54.832593    5945 main.go:141] libmachine: Starting QEMU VM...
	I0914 10:40:54.832613    5945 qemu.go:418] Using hvf for hardware acceleration
	I0914 10:40:54.832639    5945 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/kindnet-029000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/kindnet-029000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/kindnet-029000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:c6:63:6d:dc:5a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/kindnet-029000/disk.qcow2
	I0914 10:40:54.834285    5945 main.go:141] libmachine: STDOUT: 
	I0914 10:40:54.834302    5945 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 10:40:54.834315    5945 client.go:171] duration metric: took 237.413125ms to LocalClient.Create
	I0914 10:40:56.836329    5945 start.go:128] duration metric: took 2.296654542s to createHost
	I0914 10:40:56.836402    5945 start.go:83] releasing machines lock for "kindnet-029000", held for 2.297256375s
	W0914 10:40:56.836539    5945 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-029000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-029000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 10:40:56.845750    5945 out.go:201] 
	W0914 10:40:56.856945    5945 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 10:40:56.856955    5945 out.go:270] * 
	* 
	W0914 10:40:56.857872    5945 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 10:40:56.865807    5945 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.81s)
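
The harness side of the failure is straightforward: net_test.go runs the built binary and treats a non-zero exit as a failed start. A reduced sketch of that pattern follows (the flags are copied from the command line logged above; the test body itself is illustrative, not the actual net_test.go source):

	package net_test

	import (
		"os/exec"
		"testing"
	)

	func TestKindnetStart(t *testing.T) {
		// Invoke the built minikube binary exactly as logged above.
		cmd := exec.Command("out/minikube-darwin-arm64", "start",
			"-p", "kindnet-029000", "--memory=3072", "--alsologtostderr",
			"--wait=true", "--wait-timeout=15m", "--cni=kindnet", "--driver=qemu2")
		out, err := cmd.CombinedOutput()
		if err != nil {
			// This run exited with status 80 (GUEST_PROVISION).
			t.Fatalf("failed start: %v\n%s", err, out)
		}
	}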

TestNetworkPlugins/group/flannel/Start (9.85s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-029000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-029000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.852659167s)

-- stdout --
	* [flannel-029000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19643
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-029000" primary control-plane node in "flannel-029000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-029000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0914 10:40:59.189937    6060 out.go:345] Setting OutFile to fd 1 ...
	I0914 10:40:59.190091    6060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:40:59.190094    6060 out.go:358] Setting ErrFile to fd 2...
	I0914 10:40:59.190096    6060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:40:59.190220    6060 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19643-1079/.minikube/bin
	I0914 10:40:59.191304    6060 out.go:352] Setting JSON to false
	I0914 10:40:59.207998    6060 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4222,"bootTime":1726331437,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0914 10:40:59.208072    6060 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0914 10:40:59.214185    6060 out.go:177] * [flannel-029000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0914 10:40:59.222037    6060 out.go:177]   - MINIKUBE_LOCATION=19643
	I0914 10:40:59.222079    6060 notify.go:220] Checking for updates...
	I0914 10:40:59.229979    6060 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	I0914 10:40:59.233000    6060 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 10:40:59.235975    6060 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 10:40:59.238928    6060 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	I0914 10:40:59.241983    6060 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 10:40:59.245259    6060 config.go:182] Loaded profile config "multinode-699000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 10:40:59.245325    6060 config.go:182] Loaded profile config "stopped-upgrade-130000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0914 10:40:59.245368    6060 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 10:40:59.249954    6060 out.go:177] * Using the qemu2 driver based on user configuration
	I0914 10:40:59.255896    6060 start.go:297] selected driver: qemu2
	I0914 10:40:59.255903    6060 start.go:901] validating driver "qemu2" against <nil>
	I0914 10:40:59.255924    6060 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 10:40:59.258202    6060 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0914 10:40:59.260989    6060 out.go:177] * Automatically selected the socket_vmnet network
	I0914 10:40:59.264070    6060 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 10:40:59.264084    6060 cni.go:84] Creating CNI manager for "flannel"
	I0914 10:40:59.264087    6060 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0914 10:40:59.264117    6060 start.go:340] cluster config:
	{Name:flannel-029000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:flannel-029000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 10:40:59.267601    6060 iso.go:125] acquiring lock: {Name:mk9ce8cb266895455f3a8fa26b67755853c5c1f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 10:40:59.274969    6060 out.go:177] * Starting "flannel-029000" primary control-plane node in "flannel-029000" cluster
	I0914 10:40:59.279010    6060 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0914 10:40:59.279026    6060 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0914 10:40:59.279040    6060 cache.go:56] Caching tarball of preloaded images
	I0914 10:40:59.279107    6060 preload.go:172] Found /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 10:40:59.279112    6060 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0914 10:40:59.279183    6060 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/flannel-029000/config.json ...
	I0914 10:40:59.279194    6060 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/flannel-029000/config.json: {Name:mk13c4939816fa1e40b79991b3e308560b59f1e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 10:40:59.279605    6060 start.go:360] acquireMachinesLock for flannel-029000: {Name:mk3755b4300f547653a71a0686083053e04fcd63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 10:40:59.279636    6060 start.go:364] duration metric: took 26.042µs to acquireMachinesLock for "flannel-029000"
	I0914 10:40:59.279646    6060 start.go:93] Provisioning new machine with config: &{Name:flannel-029000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:flannel-029000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 10:40:59.279670    6060 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 10:40:59.284054    6060 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0914 10:40:59.299912    6060 start.go:159] libmachine.API.Create for "flannel-029000" (driver="qemu2")
	I0914 10:40:59.299938    6060 client.go:168] LocalClient.Create starting
	I0914 10:40:59.299996    6060 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/ca.pem
	I0914 10:40:59.300026    6060 main.go:141] libmachine: Decoding PEM data...
	I0914 10:40:59.300036    6060 main.go:141] libmachine: Parsing certificate...
	I0914 10:40:59.300070    6060 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/cert.pem
	I0914 10:40:59.300092    6060 main.go:141] libmachine: Decoding PEM data...
	I0914 10:40:59.300104    6060 main.go:141] libmachine: Parsing certificate...
	I0914 10:40:59.300468    6060 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19643-1079/.minikube/cache/iso/arm64/minikube-v1.34.0-1726281733-19643-arm64.iso...
	I0914 10:40:59.463171    6060 main.go:141] libmachine: Creating SSH key...
	I0914 10:40:59.546964    6060 main.go:141] libmachine: Creating Disk image...
	I0914 10:40:59.546970    6060 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 10:40:59.547150    6060 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/flannel-029000/disk.qcow2.raw /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/flannel-029000/disk.qcow2
	I0914 10:40:59.556361    6060 main.go:141] libmachine: STDOUT: 
	I0914 10:40:59.556378    6060 main.go:141] libmachine: STDERR: 
	I0914 10:40:59.556443    6060 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/flannel-029000/disk.qcow2 +20000M
	I0914 10:40:59.564255    6060 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 10:40:59.564269    6060 main.go:141] libmachine: STDERR: 
	I0914 10:40:59.564291    6060 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/flannel-029000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/flannel-029000/disk.qcow2
	I0914 10:40:59.564297    6060 main.go:141] libmachine: Starting QEMU VM...
	I0914 10:40:59.564316    6060 qemu.go:418] Using hvf for hardware acceleration
	I0914 10:40:59.564341    6060 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/flannel-029000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/flannel-029000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/flannel-029000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:6f:73:65:ad:f6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/flannel-029000/disk.qcow2
	I0914 10:40:59.566033    6060 main.go:141] libmachine: STDOUT: 
	I0914 10:40:59.566051    6060 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 10:40:59.566076    6060 client.go:171] duration metric: took 266.142875ms to LocalClient.Create
	I0914 10:41:01.568263    6060 start.go:128] duration metric: took 2.288662708s to createHost
	I0914 10:41:01.568347    6060 start.go:83] releasing machines lock for "flannel-029000", held for 2.288796458s
	W0914 10:41:01.568438    6060 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 10:41:01.579631    6060 out.go:177] * Deleting "flannel-029000" in qemu2 ...
	W0914 10:41:01.624642    6060 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 10:41:01.624672    6060 start.go:729] Will try again in 5 seconds ...
	I0914 10:41:06.626616    6060 start.go:360] acquireMachinesLock for flannel-029000: {Name:mk3755b4300f547653a71a0686083053e04fcd63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 10:41:06.627101    6060 start.go:364] duration metric: took 412.042µs to acquireMachinesLock for "flannel-029000"
	I0914 10:41:06.627175    6060 start.go:93] Provisioning new machine with config: &{Name:flannel-029000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:flannel-029000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 10:41:06.627425    6060 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 10:41:06.636621    6060 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0914 10:41:06.678883    6060 start.go:159] libmachine.API.Create for "flannel-029000" (driver="qemu2")
	I0914 10:41:06.678931    6060 client.go:168] LocalClient.Create starting
	I0914 10:41:06.679044    6060 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/ca.pem
	I0914 10:41:06.679106    6060 main.go:141] libmachine: Decoding PEM data...
	I0914 10:41:06.679123    6060 main.go:141] libmachine: Parsing certificate...
	I0914 10:41:06.679189    6060 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/cert.pem
	I0914 10:41:06.679229    6060 main.go:141] libmachine: Decoding PEM data...
	I0914 10:41:06.679238    6060 main.go:141] libmachine: Parsing certificate...
	I0914 10:41:06.679782    6060 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19643-1079/.minikube/cache/iso/arm64/minikube-v1.34.0-1726281733-19643-arm64.iso...
	I0914 10:41:06.849110    6060 main.go:141] libmachine: Creating SSH key...
	I0914 10:41:06.945561    6060 main.go:141] libmachine: Creating Disk image...
	I0914 10:41:06.945567    6060 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 10:41:06.945747    6060 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/flannel-029000/disk.qcow2.raw /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/flannel-029000/disk.qcow2
	I0914 10:41:06.955346    6060 main.go:141] libmachine: STDOUT: 
	I0914 10:41:06.955372    6060 main.go:141] libmachine: STDERR: 
	I0914 10:41:06.955432    6060 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/flannel-029000/disk.qcow2 +20000M
	I0914 10:41:06.963406    6060 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 10:41:06.963429    6060 main.go:141] libmachine: STDERR: 
	I0914 10:41:06.963444    6060 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/flannel-029000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/flannel-029000/disk.qcow2
	I0914 10:41:06.963450    6060 main.go:141] libmachine: Starting QEMU VM...
	I0914 10:41:06.963458    6060 qemu.go:418] Using hvf for hardware acceleration
	I0914 10:41:06.963483    6060 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/flannel-029000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/flannel-029000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/flannel-029000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:5b:11:82:8c:5a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/flannel-029000/disk.qcow2
	I0914 10:41:06.965087    6060 main.go:141] libmachine: STDOUT: 
	I0914 10:41:06.965111    6060 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 10:41:06.965125    6060 client.go:171] duration metric: took 286.201541ms to LocalClient.Create
	I0914 10:41:08.967152    6060 start.go:128] duration metric: took 2.339805041s to createHost
	I0914 10:41:08.967215    6060 start.go:83] releasing machines lock for "flannel-029000", held for 2.3401715s
	W0914 10:41:08.967425    6060 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p flannel-029000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-029000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 10:41:08.976791    6060 out.go:201] 
	W0914 10:41:08.988871    6060 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 10:41:08.988887    6060 out.go:270] * 
	* 
	W0914 10:41:08.990362    6060 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 10:41:09.000786    6060 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.85s)
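Note on this failure mode: every qemu2 start in this group fails identically. libmachine launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, which must first reach the socket_vmnet daemon's unix socket at /var/run/socket_vmnet; that connection is refused on this agent, so QEMU never starts and minikube exits with GUEST_PROVISION (exit status 80). A rough recovery sketch for the build host follows; the daemon launch line mirrors the socket_vmnet README and the /opt/socket_vmnet prefix seen in the log, so the exact paths and gateway address are assumptions about this install:

	# Is the daemon running, and does its unix socket exist?
	pgrep -fl socket_vmnet
	ls -l /var/run/socket_vmnet

	# If not, start it by hand (gateway address is an assumed default):
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet &

	# Or, for a Homebrew install, use the packaged service instead:
	sudo brew services start socket_vmnet

Once the socket accepts connections, each affected profile can be removed with "minikube delete -p <profile>" and restarted.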

TestNetworkPlugins/group/enable-default-cni/Start (9.91s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-029000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-029000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.908444583s)

-- stdout --
	* [enable-default-cni-029000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19643
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-029000" primary control-plane node in "enable-default-cni-029000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-029000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0914 10:41:11.420269    6177 out.go:345] Setting OutFile to fd 1 ...
	I0914 10:41:11.420395    6177 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:41:11.420398    6177 out.go:358] Setting ErrFile to fd 2...
	I0914 10:41:11.420401    6177 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:41:11.420527    6177 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19643-1079/.minikube/bin
	I0914 10:41:11.421652    6177 out.go:352] Setting JSON to false
	I0914 10:41:11.438244    6177 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4234,"bootTime":1726331437,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0914 10:41:11.438319    6177 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0914 10:41:11.444370    6177 out.go:177] * [enable-default-cni-029000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0914 10:41:11.453217    6177 out.go:177]   - MINIKUBE_LOCATION=19643
	I0914 10:41:11.453276    6177 notify.go:220] Checking for updates...
	I0914 10:41:11.459106    6177 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	I0914 10:41:11.462144    6177 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 10:41:11.465132    6177 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 10:41:11.468145    6177 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	I0914 10:41:11.471135    6177 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 10:41:11.474460    6177 config.go:182] Loaded profile config "multinode-699000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 10:41:11.474527    6177 config.go:182] Loaded profile config "stopped-upgrade-130000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0914 10:41:11.474577    6177 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 10:41:11.479121    6177 out.go:177] * Using the qemu2 driver based on user configuration
	I0914 10:41:11.486113    6177 start.go:297] selected driver: qemu2
	I0914 10:41:11.486121    6177 start.go:901] validating driver "qemu2" against <nil>
	I0914 10:41:11.486128    6177 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 10:41:11.488805    6177 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0914 10:41:11.493117    6177 out.go:177] * Automatically selected the socket_vmnet network
	E0914 10:41:11.496243    6177 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0914 10:41:11.496259    6177 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 10:41:11.496283    6177 cni.go:84] Creating CNI manager for "bridge"
	I0914 10:41:11.496288    6177 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0914 10:41:11.496338    6177 start.go:340] cluster config:
	{Name:enable-default-cni-029000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-029000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 10:41:11.500095    6177 iso.go:125] acquiring lock: {Name:mk9ce8cb266895455f3a8fa26b67755853c5c1f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 10:41:11.506099    6177 out.go:177] * Starting "enable-default-cni-029000" primary control-plane node in "enable-default-cni-029000" cluster
	I0914 10:41:11.510131    6177 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0914 10:41:11.510143    6177 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0914 10:41:11.510154    6177 cache.go:56] Caching tarball of preloaded images
	I0914 10:41:11.510214    6177 preload.go:172] Found /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 10:41:11.510220    6177 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0914 10:41:11.510284    6177 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/enable-default-cni-029000/config.json ...
	I0914 10:41:11.510295    6177 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/enable-default-cni-029000/config.json: {Name:mkc48e78450d923a7cd4a8c3aed9cfbc19082e62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 10:41:11.510510    6177 start.go:360] acquireMachinesLock for enable-default-cni-029000: {Name:mk3755b4300f547653a71a0686083053e04fcd63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 10:41:11.510544    6177 start.go:364] duration metric: took 26.625µs to acquireMachinesLock for "enable-default-cni-029000"
	I0914 10:41:11.510555    6177 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-029000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-029000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 10:41:11.510578    6177 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 10:41:11.518094    6177 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0914 10:41:11.533938    6177 start.go:159] libmachine.API.Create for "enable-default-cni-029000" (driver="qemu2")
	I0914 10:41:11.533967    6177 client.go:168] LocalClient.Create starting
	I0914 10:41:11.534036    6177 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/ca.pem
	I0914 10:41:11.534089    6177 main.go:141] libmachine: Decoding PEM data...
	I0914 10:41:11.534099    6177 main.go:141] libmachine: Parsing certificate...
	I0914 10:41:11.534120    6177 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/cert.pem
	I0914 10:41:11.534144    6177 main.go:141] libmachine: Decoding PEM data...
	I0914 10:41:11.534151    6177 main.go:141] libmachine: Parsing certificate...
	I0914 10:41:11.534519    6177 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19643-1079/.minikube/cache/iso/arm64/minikube-v1.34.0-1726281733-19643-arm64.iso...
	I0914 10:41:11.696767    6177 main.go:141] libmachine: Creating SSH key...
	I0914 10:41:11.780017    6177 main.go:141] libmachine: Creating Disk image...
	I0914 10:41:11.780027    6177 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 10:41:11.780211    6177 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/enable-default-cni-029000/disk.qcow2.raw /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/enable-default-cni-029000/disk.qcow2
	I0914 10:41:11.789436    6177 main.go:141] libmachine: STDOUT: 
	I0914 10:41:11.789457    6177 main.go:141] libmachine: STDERR: 
	I0914 10:41:11.789516    6177 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/enable-default-cni-029000/disk.qcow2 +20000M
	I0914 10:41:11.797507    6177 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 10:41:11.797522    6177 main.go:141] libmachine: STDERR: 
	I0914 10:41:11.797540    6177 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/enable-default-cni-029000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/enable-default-cni-029000/disk.qcow2
	I0914 10:41:11.797546    6177 main.go:141] libmachine: Starting QEMU VM...
	I0914 10:41:11.797557    6177 qemu.go:418] Using hvf for hardware acceleration
	I0914 10:41:11.797588    6177 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/enable-default-cni-029000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/enable-default-cni-029000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/enable-default-cni-029000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:1f:25:dd:97:52 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/enable-default-cni-029000/disk.qcow2
	I0914 10:41:11.799264    6177 main.go:141] libmachine: STDOUT: 
	I0914 10:41:11.799282    6177 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 10:41:11.799305    6177 client.go:171] duration metric: took 265.342375ms to LocalClient.Create
	I0914 10:41:13.801446    6177 start.go:128] duration metric: took 2.290932083s to createHost
	I0914 10:41:13.801538    6177 start.go:83] releasing machines lock for "enable-default-cni-029000", held for 2.291079667s
	W0914 10:41:13.801665    6177 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 10:41:13.813298    6177 out.go:177] * Deleting "enable-default-cni-029000" in qemu2 ...
	W0914 10:41:13.846688    6177 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 10:41:13.846717    6177 start.go:729] Will try again in 5 seconds ...
	I0914 10:41:18.848755    6177 start.go:360] acquireMachinesLock for enable-default-cni-029000: {Name:mk3755b4300f547653a71a0686083053e04fcd63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 10:41:18.849351    6177 start.go:364] duration metric: took 465.666µs to acquireMachinesLock for "enable-default-cni-029000"
	I0914 10:41:18.849508    6177 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-029000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-029000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 10:41:18.849835    6177 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 10:41:18.855370    6177 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0914 10:41:18.905885    6177 start.go:159] libmachine.API.Create for "enable-default-cni-029000" (driver="qemu2")
	I0914 10:41:18.905944    6177 client.go:168] LocalClient.Create starting
	I0914 10:41:18.906080    6177 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/ca.pem
	I0914 10:41:18.906152    6177 main.go:141] libmachine: Decoding PEM data...
	I0914 10:41:18.906169    6177 main.go:141] libmachine: Parsing certificate...
	I0914 10:41:18.906261    6177 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/cert.pem
	I0914 10:41:18.906307    6177 main.go:141] libmachine: Decoding PEM data...
	I0914 10:41:18.906322    6177 main.go:141] libmachine: Parsing certificate...
	I0914 10:41:18.907014    6177 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19643-1079/.minikube/cache/iso/arm64/minikube-v1.34.0-1726281733-19643-arm64.iso...
	I0914 10:41:19.078229    6177 main.go:141] libmachine: Creating SSH key...
	I0914 10:41:19.233350    6177 main.go:141] libmachine: Creating Disk image...
	I0914 10:41:19.233363    6177 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 10:41:19.233574    6177 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/enable-default-cni-029000/disk.qcow2.raw /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/enable-default-cni-029000/disk.qcow2
	I0914 10:41:19.243836    6177 main.go:141] libmachine: STDOUT: 
	I0914 10:41:19.243855    6177 main.go:141] libmachine: STDERR: 
	I0914 10:41:19.243924    6177 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/enable-default-cni-029000/disk.qcow2 +20000M
	I0914 10:41:19.252061    6177 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 10:41:19.252082    6177 main.go:141] libmachine: STDERR: 
	I0914 10:41:19.252093    6177 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/enable-default-cni-029000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/enable-default-cni-029000/disk.qcow2
	I0914 10:41:19.252106    6177 main.go:141] libmachine: Starting QEMU VM...
	I0914 10:41:19.252116    6177 qemu.go:418] Using hvf for hardware acceleration
	I0914 10:41:19.252143    6177 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/enable-default-cni-029000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/enable-default-cni-029000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/enable-default-cni-029000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:2b:99:31:24:ae -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/enable-default-cni-029000/disk.qcow2
	I0914 10:41:19.253916    6177 main.go:141] libmachine: STDOUT: 
	I0914 10:41:19.253929    6177 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 10:41:19.253943    6177 client.go:171] duration metric: took 348.008833ms to LocalClient.Create
	I0914 10:41:21.256021    6177 start.go:128] duration metric: took 2.406259125s to createHost
	I0914 10:41:21.256084    6177 start.go:83] releasing machines lock for "enable-default-cni-029000", held for 2.406810583s
	W0914 10:41:21.256365    6177 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-029000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-029000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 10:41:21.265946    6177 out.go:201] 
	W0914 10:41:21.275126    6177 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 10:41:21.275181    6177 out.go:270] * 
	* 
	W0914 10:41:21.277407    6177 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 10:41:21.290976    6177 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.91s)
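Worth noting in the stderr above: start_flags.go reports "Found deprecated --enable-default-cni flag, setting --cni=bridge", so this test exercises the same bridge-CNI path as the bridge test below, and the failure is the socket_vmnet connection, not the CNI selection. The non-deprecated spelling of the same invocation would be (profile name kept from the test, purely illustrative):

	out/minikube-darwin-arm64 start -p enable-default-cni-029000 --memory=3072 --cni=bridge --driver=qemu2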

TestNetworkPlugins/group/bridge/Start (9.82s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-029000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-029000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.819807458s)

-- stdout --
	* [bridge-029000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19643
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-029000" primary control-plane node in "bridge-029000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-029000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0914 10:41:23.496668    6292 out.go:345] Setting OutFile to fd 1 ...
	I0914 10:41:23.496836    6292 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:41:23.496840    6292 out.go:358] Setting ErrFile to fd 2...
	I0914 10:41:23.496842    6292 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:41:23.496973    6292 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19643-1079/.minikube/bin
	I0914 10:41:23.498026    6292 out.go:352] Setting JSON to false
	I0914 10:41:23.514378    6292 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4246,"bootTime":1726331437,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0914 10:41:23.514446    6292 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0914 10:41:23.521492    6292 out.go:177] * [bridge-029000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0914 10:41:23.529396    6292 out.go:177]   - MINIKUBE_LOCATION=19643
	I0914 10:41:23.529469    6292 notify.go:220] Checking for updates...
	I0914 10:41:23.540437    6292 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	I0914 10:41:23.543341    6292 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 10:41:23.546372    6292 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 10:41:23.549389    6292 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	I0914 10:41:23.550766    6292 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 10:41:23.553662    6292 config.go:182] Loaded profile config "multinode-699000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 10:41:23.553736    6292 config.go:182] Loaded profile config "stopped-upgrade-130000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0914 10:41:23.553788    6292 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 10:41:23.557405    6292 out.go:177] * Using the qemu2 driver based on user configuration
	I0914 10:41:23.563343    6292 start.go:297] selected driver: qemu2
	I0914 10:41:23.563349    6292 start.go:901] validating driver "qemu2" against <nil>
	I0914 10:41:23.563355    6292 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 10:41:23.565768    6292 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0914 10:41:23.568359    6292 out.go:177] * Automatically selected the socket_vmnet network
	I0914 10:41:23.571436    6292 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 10:41:23.571453    6292 cni.go:84] Creating CNI manager for "bridge"
	I0914 10:41:23.571460    6292 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0914 10:41:23.571492    6292 start.go:340] cluster config:
	{Name:bridge-029000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-029000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 10:41:23.575189    6292 iso.go:125] acquiring lock: {Name:mk9ce8cb266895455f3a8fa26b67755853c5c1f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 10:41:23.582432    6292 out.go:177] * Starting "bridge-029000" primary control-plane node in "bridge-029000" cluster
	I0914 10:41:23.586339    6292 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0914 10:41:23.586352    6292 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0914 10:41:23.586363    6292 cache.go:56] Caching tarball of preloaded images
	I0914 10:41:23.586413    6292 preload.go:172] Found /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 10:41:23.586418    6292 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0914 10:41:23.586466    6292 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/bridge-029000/config.json ...
	I0914 10:41:23.586476    6292 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/bridge-029000/config.json: {Name:mk0d929dabb5f653ea99354369a42e1dfbfea1a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 10:41:23.586862    6292 start.go:360] acquireMachinesLock for bridge-029000: {Name:mk3755b4300f547653a71a0686083053e04fcd63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 10:41:23.586892    6292 start.go:364] duration metric: took 24.5µs to acquireMachinesLock for "bridge-029000"
	I0914 10:41:23.586900    6292 start.go:93] Provisioning new machine with config: &{Name:bridge-029000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-029000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 10:41:23.586923    6292 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 10:41:23.590433    6292 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0914 10:41:23.606233    6292 start.go:159] libmachine.API.Create for "bridge-029000" (driver="qemu2")
	I0914 10:41:23.606256    6292 client.go:168] LocalClient.Create starting
	I0914 10:41:23.606313    6292 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/ca.pem
	I0914 10:41:23.606343    6292 main.go:141] libmachine: Decoding PEM data...
	I0914 10:41:23.606355    6292 main.go:141] libmachine: Parsing certificate...
	I0914 10:41:23.606391    6292 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/cert.pem
	I0914 10:41:23.606414    6292 main.go:141] libmachine: Decoding PEM data...
	I0914 10:41:23.606422    6292 main.go:141] libmachine: Parsing certificate...
	I0914 10:41:23.606898    6292 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19643-1079/.minikube/cache/iso/arm64/minikube-v1.34.0-1726281733-19643-arm64.iso...
	I0914 10:41:23.769898    6292 main.go:141] libmachine: Creating SSH key...
	I0914 10:41:23.806391    6292 main.go:141] libmachine: Creating Disk image...
	I0914 10:41:23.806397    6292 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 10:41:23.806557    6292 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/bridge-029000/disk.qcow2.raw /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/bridge-029000/disk.qcow2
	I0914 10:41:23.815936    6292 main.go:141] libmachine: STDOUT: 
	I0914 10:41:23.815957    6292 main.go:141] libmachine: STDERR: 
	I0914 10:41:23.816009    6292 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/bridge-029000/disk.qcow2 +20000M
	I0914 10:41:23.824220    6292 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 10:41:23.824235    6292 main.go:141] libmachine: STDERR: 
	I0914 10:41:23.824249    6292 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/bridge-029000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/bridge-029000/disk.qcow2
	I0914 10:41:23.824254    6292 main.go:141] libmachine: Starting QEMU VM...
	I0914 10:41:23.824266    6292 qemu.go:418] Using hvf for hardware acceleration
	I0914 10:41:23.824298    6292 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/bridge-029000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/bridge-029000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/bridge-029000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:07:83:09:88:89 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/bridge-029000/disk.qcow2
	I0914 10:41:23.826012    6292 main.go:141] libmachine: STDOUT: 
	I0914 10:41:23.826026    6292 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 10:41:23.826051    6292 client.go:171] duration metric: took 219.797917ms to LocalClient.Create
	I0914 10:41:25.828203    6292 start.go:128] duration metric: took 2.241346375s to createHost
	I0914 10:41:25.828278    6292 start.go:83] releasing machines lock for "bridge-029000", held for 2.241473083s
	W0914 10:41:25.828343    6292 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 10:41:25.844650    6292 out.go:177] * Deleting "bridge-029000" in qemu2 ...
	W0914 10:41:25.877525    6292 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 10:41:25.877550    6292 start.go:729] Will try again in 5 seconds ...
	I0914 10:41:30.879541    6292 start.go:360] acquireMachinesLock for bridge-029000: {Name:mk3755b4300f547653a71a0686083053e04fcd63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 10:41:30.880071    6292 start.go:364] duration metric: took 437.583µs to acquireMachinesLock for "bridge-029000"
	I0914 10:41:30.880145    6292 start.go:93] Provisioning new machine with config: &{Name:bridge-029000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-029000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 10:41:30.880609    6292 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 10:41:30.888200    6292 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0914 10:41:30.938525    6292 start.go:159] libmachine.API.Create for "bridge-029000" (driver="qemu2")
	I0914 10:41:30.938579    6292 client.go:168] LocalClient.Create starting
	I0914 10:41:30.938715    6292 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/ca.pem
	I0914 10:41:30.938783    6292 main.go:141] libmachine: Decoding PEM data...
	I0914 10:41:30.938804    6292 main.go:141] libmachine: Parsing certificate...
	I0914 10:41:30.938865    6292 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/cert.pem
	I0914 10:41:30.938909    6292 main.go:141] libmachine: Decoding PEM data...
	I0914 10:41:30.938926    6292 main.go:141] libmachine: Parsing certificate...
	I0914 10:41:30.939684    6292 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19643-1079/.minikube/cache/iso/arm64/minikube-v1.34.0-1726281733-19643-arm64.iso...
	I0914 10:41:31.109832    6292 main.go:141] libmachine: Creating SSH key...
	I0914 10:41:31.224246    6292 main.go:141] libmachine: Creating Disk image...
	I0914 10:41:31.224256    6292 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 10:41:31.224441    6292 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/bridge-029000/disk.qcow2.raw /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/bridge-029000/disk.qcow2
	I0914 10:41:31.234044    6292 main.go:141] libmachine: STDOUT: 
	I0914 10:41:31.234061    6292 main.go:141] libmachine: STDERR: 
	I0914 10:41:31.234125    6292 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/bridge-029000/disk.qcow2 +20000M
	I0914 10:41:31.242500    6292 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 10:41:31.242517    6292 main.go:141] libmachine: STDERR: 
	I0914 10:41:31.242537    6292 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/bridge-029000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/bridge-029000/disk.qcow2
	I0914 10:41:31.242545    6292 main.go:141] libmachine: Starting QEMU VM...
	I0914 10:41:31.242570    6292 qemu.go:418] Using hvf for hardware acceleration
	I0914 10:41:31.242605    6292 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/bridge-029000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/bridge-029000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/bridge-029000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:f4:9f:ec:12:04 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/bridge-029000/disk.qcow2
	I0914 10:41:31.244388    6292 main.go:141] libmachine: STDOUT: 
	I0914 10:41:31.244404    6292 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 10:41:31.244426    6292 client.go:171] duration metric: took 305.8535ms to LocalClient.Create
	I0914 10:41:33.246475    6292 start.go:128] duration metric: took 2.365936667s to createHost
	I0914 10:41:33.246528    6292 start.go:83] releasing machines lock for "bridge-029000", held for 2.366527958s
	W0914 10:41:33.246806    6292 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p bridge-029000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-029000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 10:41:33.254253    6292 out.go:201] 
	W0914 10:41:33.265464    6292 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 10:41:33.265500    6292 out.go:270] * 
	* 
	W0914 10:41:33.267161    6292 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 10:41:33.275252    6292 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.82s)
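The qemu-system-aarch64 command lines above also show how the pieces fit together: socket_vmnet_client connects to the daemon socket and then execs QEMU with that connection inherited as file descriptor 3 (hence "-netdev socket,id=net0,fd=3"), which is why the "Connection refused" error surfaces before QEMU runs at all. Assuming the daemon is up, the same mechanism can be smoke-tested in isolation by wrapping any trivial command in place of QEMU (paths as in the log):

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet echo ok

If this prints "ok" instead of the connection error, the qemu2 driver's network setup should succeed as well.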

TestNetworkPlugins/group/kubenet/Start (9.91s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-029000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
E0914 10:41:36.630693    1603 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/addons-528000/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-029000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.907220375s)

                                                
                                                
-- stdout --
	* [kubenet-029000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19643
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-029000" primary control-plane node in "kubenet-029000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-029000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 10:41:35.504724    6401 out.go:345] Setting OutFile to fd 1 ...
	I0914 10:41:35.504866    6401 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:41:35.504872    6401 out.go:358] Setting ErrFile to fd 2...
	I0914 10:41:35.504875    6401 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:41:35.504984    6401 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19643-1079/.minikube/bin
	I0914 10:41:35.506075    6401 out.go:352] Setting JSON to false
	I0914 10:41:35.523029    6401 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4258,"bootTime":1726331437,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0914 10:41:35.523119    6401 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0914 10:41:35.528269    6401 out.go:177] * [kubenet-029000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0914 10:41:35.536151    6401 out.go:177]   - MINIKUBE_LOCATION=19643
	I0914 10:41:35.536263    6401 notify.go:220] Checking for updates...
	I0914 10:41:35.543098    6401 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	I0914 10:41:35.546105    6401 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 10:41:35.549114    6401 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 10:41:35.552124    6401 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	I0914 10:41:35.555120    6401 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 10:41:35.558393    6401 config.go:182] Loaded profile config "multinode-699000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 10:41:35.558465    6401 config.go:182] Loaded profile config "stopped-upgrade-130000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0914 10:41:35.558509    6401 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 10:41:35.562092    6401 out.go:177] * Using the qemu2 driver based on user configuration
	I0914 10:41:35.568069    6401 start.go:297] selected driver: qemu2
	I0914 10:41:35.568078    6401 start.go:901] validating driver "qemu2" against <nil>
	I0914 10:41:35.568084    6401 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 10:41:35.570772    6401 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0914 10:41:35.574086    6401 out.go:177] * Automatically selected the socket_vmnet network
	I0914 10:41:35.577177    6401 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 10:41:35.577196    6401 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0914 10:41:35.577233    6401 start.go:340] cluster config:
	{Name:kubenet-029000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubenet-029000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 10:41:35.581574    6401 iso.go:125] acquiring lock: {Name:mk9ce8cb266895455f3a8fa26b67755853c5c1f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 10:41:35.589113    6401 out.go:177] * Starting "kubenet-029000" primary control-plane node in "kubenet-029000" cluster
	I0914 10:41:35.593116    6401 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0914 10:41:35.593149    6401 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0914 10:41:35.593163    6401 cache.go:56] Caching tarball of preloaded images
	I0914 10:41:35.593260    6401 preload.go:172] Found /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 10:41:35.593267    6401 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0914 10:41:35.593326    6401 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/kubenet-029000/config.json ...
	I0914 10:41:35.593337    6401 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/kubenet-029000/config.json: {Name:mk742e33352c85edc39cfff0c1d8d1509871d022 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 10:41:35.593590    6401 start.go:360] acquireMachinesLock for kubenet-029000: {Name:mk3755b4300f547653a71a0686083053e04fcd63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 10:41:35.593621    6401 start.go:364] duration metric: took 25.75µs to acquireMachinesLock for "kubenet-029000"
	I0914 10:41:35.593630    6401 start.go:93] Provisioning new machine with config: &{Name:kubenet-029000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubenet-029000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 10:41:35.593676    6401 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 10:41:35.597122    6401 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0914 10:41:35.613859    6401 start.go:159] libmachine.API.Create for "kubenet-029000" (driver="qemu2")
	I0914 10:41:35.613889    6401 client.go:168] LocalClient.Create starting
	I0914 10:41:35.613961    6401 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/ca.pem
	I0914 10:41:35.613994    6401 main.go:141] libmachine: Decoding PEM data...
	I0914 10:41:35.614004    6401 main.go:141] libmachine: Parsing certificate...
	I0914 10:41:35.614043    6401 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/cert.pem
	I0914 10:41:35.614067    6401 main.go:141] libmachine: Decoding PEM data...
	I0914 10:41:35.614077    6401 main.go:141] libmachine: Parsing certificate...
	I0914 10:41:35.614422    6401 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19643-1079/.minikube/cache/iso/arm64/minikube-v1.34.0-1726281733-19643-arm64.iso...
	I0914 10:41:35.777704    6401 main.go:141] libmachine: Creating SSH key...
	I0914 10:41:35.864642    6401 main.go:141] libmachine: Creating Disk image...
	I0914 10:41:35.864652    6401 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 10:41:35.864863    6401 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/kubenet-029000/disk.qcow2.raw /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/kubenet-029000/disk.qcow2
	I0914 10:41:35.875419    6401 main.go:141] libmachine: STDOUT: 
	I0914 10:41:35.875453    6401 main.go:141] libmachine: STDERR: 
	I0914 10:41:35.875528    6401 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/kubenet-029000/disk.qcow2 +20000M
	I0914 10:41:35.884728    6401 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 10:41:35.884767    6401 main.go:141] libmachine: STDERR: 
	I0914 10:41:35.884788    6401 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/kubenet-029000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/kubenet-029000/disk.qcow2
	I0914 10:41:35.884793    6401 main.go:141] libmachine: Starting QEMU VM...
	I0914 10:41:35.884805    6401 qemu.go:418] Using hvf for hardware acceleration
	I0914 10:41:35.884834    6401 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/kubenet-029000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/kubenet-029000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/kubenet-029000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:36:94:ca:d1:06 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/kubenet-029000/disk.qcow2
	I0914 10:41:35.886871    6401 main.go:141] libmachine: STDOUT: 
	I0914 10:41:35.886886    6401 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 10:41:35.886909    6401 client.go:171] duration metric: took 273.025ms to LocalClient.Create
	I0914 10:41:37.889082    6401 start.go:128] duration metric: took 2.295473042s to createHost
	I0914 10:41:37.889174    6401 start.go:83] releasing machines lock for "kubenet-029000", held for 2.295639875s
	W0914 10:41:37.889228    6401 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 10:41:37.900387    6401 out.go:177] * Deleting "kubenet-029000" in qemu2 ...
	W0914 10:41:37.932787    6401 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 10:41:37.932814    6401 start.go:729] Will try again in 5 seconds ...
	I0914 10:41:42.934812    6401 start.go:360] acquireMachinesLock for kubenet-029000: {Name:mk3755b4300f547653a71a0686083053e04fcd63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 10:41:42.935141    6401 start.go:364] duration metric: took 260.958µs to acquireMachinesLock for "kubenet-029000"
	I0914 10:41:42.935212    6401 start.go:93] Provisioning new machine with config: &{Name:kubenet-029000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubenet-029000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 10:41:42.935303    6401 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 10:41:42.943671    6401 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0914 10:41:42.979693    6401 start.go:159] libmachine.API.Create for "kubenet-029000" (driver="qemu2")
	I0914 10:41:42.979731    6401 client.go:168] LocalClient.Create starting
	I0914 10:41:42.979836    6401 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/ca.pem
	I0914 10:41:42.979910    6401 main.go:141] libmachine: Decoding PEM data...
	I0914 10:41:42.979927    6401 main.go:141] libmachine: Parsing certificate...
	I0914 10:41:42.979987    6401 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/cert.pem
	I0914 10:41:42.980029    6401 main.go:141] libmachine: Decoding PEM data...
	I0914 10:41:42.980040    6401 main.go:141] libmachine: Parsing certificate...
	I0914 10:41:42.980782    6401 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19643-1079/.minikube/cache/iso/arm64/minikube-v1.34.0-1726281733-19643-arm64.iso...
	I0914 10:41:43.147938    6401 main.go:141] libmachine: Creating SSH key...
	I0914 10:41:43.322856    6401 main.go:141] libmachine: Creating Disk image...
	I0914 10:41:43.322865    6401 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 10:41:43.323103    6401 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/kubenet-029000/disk.qcow2.raw /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/kubenet-029000/disk.qcow2
	I0914 10:41:43.333009    6401 main.go:141] libmachine: STDOUT: 
	I0914 10:41:43.333029    6401 main.go:141] libmachine: STDERR: 
	I0914 10:41:43.333083    6401 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/kubenet-029000/disk.qcow2 +20000M
	I0914 10:41:43.341110    6401 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 10:41:43.341124    6401 main.go:141] libmachine: STDERR: 
	I0914 10:41:43.341137    6401 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/kubenet-029000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/kubenet-029000/disk.qcow2
	I0914 10:41:43.341146    6401 main.go:141] libmachine: Starting QEMU VM...
	I0914 10:41:43.341154    6401 qemu.go:418] Using hvf for hardware acceleration
	I0914 10:41:43.341186    6401 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/kubenet-029000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/kubenet-029000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/kubenet-029000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:b6:fc:80:18:67 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/kubenet-029000/disk.qcow2
	I0914 10:41:43.342875    6401 main.go:141] libmachine: STDOUT: 
	I0914 10:41:43.342892    6401 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 10:41:43.342904    6401 client.go:171] duration metric: took 363.182417ms to LocalClient.Create
	I0914 10:41:45.345019    6401 start.go:128] duration metric: took 2.409781583s to createHost
	I0914 10:41:45.345106    6401 start.go:83] releasing machines lock for "kubenet-029000", held for 2.410046959s
	W0914 10:41:45.345474    6401 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-029000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-029000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 10:41:45.357032    6401 out.go:201] 
	W0914 10:41:45.361179    6401 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 10:41:45.361196    6401 out.go:270] * 
	* 
	W0914 10:41:45.362829    6401 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 10:41:45.371079    6401 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.91s)
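
The log above also shows minikube's built-in recovery path: after the first refused connection it deletes the half-created machine, waits 5 seconds, and retries once, but the retry hits the same refused socket, so the real fix is on the host. If socket_vmnet was installed as a launchd service per its README (the io.github.lima-vm.socket_vmnet label below follows that README and is an assumption about this CI host, not something the log confirms), restarting the daemon would look like:

	# Restart the daemon in the system domain; -k kills a running instance first
	sudo launchctl kickstart -k system/io.github.lima-vm.socket_vmnet
	# Confirm it is loaded
	sudo launchctl list | grep socket_vmnet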

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (10.32s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-661000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
E0914 10:41:47.819753    1603 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/functional-855000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-661000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (10.27145225s)

                                                
                                                
-- stdout --
	* [old-k8s-version-661000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19643
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-661000" primary control-plane node in "old-k8s-version-661000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-661000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 10:41:47.588863    6517 out.go:345] Setting OutFile to fd 1 ...
	I0914 10:41:47.589029    6517 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:41:47.589033    6517 out.go:358] Setting ErrFile to fd 2...
	I0914 10:41:47.589035    6517 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:41:47.589157    6517 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19643-1079/.minikube/bin
	I0914 10:41:47.590194    6517 out.go:352] Setting JSON to false
	I0914 10:41:47.606467    6517 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4270,"bootTime":1726331437,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0914 10:41:47.606539    6517 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0914 10:41:47.612805    6517 out.go:177] * [old-k8s-version-661000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0914 10:41:47.620737    6517 out.go:177]   - MINIKUBE_LOCATION=19643
	I0914 10:41:47.620771    6517 notify.go:220] Checking for updates...
	I0914 10:41:47.627767    6517 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	I0914 10:41:47.630746    6517 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 10:41:47.633698    6517 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 10:41:47.636737    6517 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	I0914 10:41:47.639635    6517 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 10:41:47.643013    6517 config.go:182] Loaded profile config "multinode-699000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 10:41:47.643081    6517 config.go:182] Loaded profile config "stopped-upgrade-130000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0914 10:41:47.643127    6517 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 10:41:47.647705    6517 out.go:177] * Using the qemu2 driver based on user configuration
	I0914 10:41:47.654711    6517 start.go:297] selected driver: qemu2
	I0914 10:41:47.654722    6517 start.go:901] validating driver "qemu2" against <nil>
	I0914 10:41:47.654730    6517 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 10:41:47.657162    6517 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0914 10:41:47.659701    6517 out.go:177] * Automatically selected the socket_vmnet network
	I0914 10:41:47.661034    6517 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 10:41:47.661051    6517 cni.go:84] Creating CNI manager for ""
	I0914 10:41:47.661070    6517 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0914 10:41:47.661096    6517 start.go:340] cluster config:
	{Name:old-k8s-version-661000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-661000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 10:41:47.664658    6517 iso.go:125] acquiring lock: {Name:mk9ce8cb266895455f3a8fa26b67755853c5c1f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 10:41:47.671763    6517 out.go:177] * Starting "old-k8s-version-661000" primary control-plane node in "old-k8s-version-661000" cluster
	I0914 10:41:47.675694    6517 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0914 10:41:47.675707    6517 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0914 10:41:47.675715    6517 cache.go:56] Caching tarball of preloaded images
	I0914 10:41:47.675767    6517 preload.go:172] Found /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 10:41:47.675772    6517 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0914 10:41:47.675821    6517 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/old-k8s-version-661000/config.json ...
	I0914 10:41:47.675831    6517 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/old-k8s-version-661000/config.json: {Name:mkd016ac178d4c18f8617736f9dfb047afa2a534 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 10:41:47.676035    6517 start.go:360] acquireMachinesLock for old-k8s-version-661000: {Name:mk3755b4300f547653a71a0686083053e04fcd63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 10:41:47.676067    6517 start.go:364] duration metric: took 25.083µs to acquireMachinesLock for "old-k8s-version-661000"
	I0914 10:41:47.676077    6517 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-661000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-661000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 10:41:47.676104    6517 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 10:41:47.683679    6517 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0914 10:41:47.699507    6517 start.go:159] libmachine.API.Create for "old-k8s-version-661000" (driver="qemu2")
	I0914 10:41:47.699543    6517 client.go:168] LocalClient.Create starting
	I0914 10:41:47.699610    6517 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/ca.pem
	I0914 10:41:47.699642    6517 main.go:141] libmachine: Decoding PEM data...
	I0914 10:41:47.699652    6517 main.go:141] libmachine: Parsing certificate...
	I0914 10:41:47.699688    6517 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/cert.pem
	I0914 10:41:47.699710    6517 main.go:141] libmachine: Decoding PEM data...
	I0914 10:41:47.699718    6517 main.go:141] libmachine: Parsing certificate...
	I0914 10:41:47.700053    6517 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19643-1079/.minikube/cache/iso/arm64/minikube-v1.34.0-1726281733-19643-arm64.iso...
	I0914 10:41:47.897102    6517 main.go:141] libmachine: Creating SSH key...
	I0914 10:41:48.240828    6517 main.go:141] libmachine: Creating Disk image...
	I0914 10:41:48.240838    6517 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 10:41:48.241038    6517 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/old-k8s-version-661000/disk.qcow2.raw /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/old-k8s-version-661000/disk.qcow2
	I0914 10:41:48.251472    6517 main.go:141] libmachine: STDOUT: 
	I0914 10:41:48.251509    6517 main.go:141] libmachine: STDERR: 
	I0914 10:41:48.251582    6517 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/old-k8s-version-661000/disk.qcow2 +20000M
	I0914 10:41:48.260058    6517 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 10:41:48.260079    6517 main.go:141] libmachine: STDERR: 
	I0914 10:41:48.260099    6517 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/old-k8s-version-661000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/old-k8s-version-661000/disk.qcow2
	I0914 10:41:48.260104    6517 main.go:141] libmachine: Starting QEMU VM...
	I0914 10:41:48.260117    6517 qemu.go:418] Using hvf for hardware acceleration
	I0914 10:41:48.260144    6517 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/old-k8s-version-661000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/old-k8s-version-661000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/old-k8s-version-661000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:1c:92:66:7d:54 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/old-k8s-version-661000/disk.qcow2
	I0914 10:41:48.261984    6517 main.go:141] libmachine: STDOUT: 
	I0914 10:41:48.262006    6517 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 10:41:48.262033    6517 client.go:171] duration metric: took 562.505583ms to LocalClient.Create
	I0914 10:41:50.262678    6517 start.go:128] duration metric: took 2.586669791s to createHost
	I0914 10:41:50.262711    6517 start.go:83] releasing machines lock for "old-k8s-version-661000", held for 2.586746792s
	W0914 10:41:50.262745    6517 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 10:41:50.273960    6517 out.go:177] * Deleting "old-k8s-version-661000" in qemu2 ...
	W0914 10:41:50.304353    6517 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 10:41:50.304371    6517 start.go:729] Will try again in 5 seconds ...
	I0914 10:41:55.306433    6517 start.go:360] acquireMachinesLock for old-k8s-version-661000: {Name:mk3755b4300f547653a71a0686083053e04fcd63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 10:41:55.307092    6517 start.go:364] duration metric: took 532µs to acquireMachinesLock for "old-k8s-version-661000"
	I0914 10:41:55.307205    6517 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-661000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-661000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 10:41:55.307528    6517 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 10:41:55.315995    6517 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0914 10:41:55.363465    6517 start.go:159] libmachine.API.Create for "old-k8s-version-661000" (driver="qemu2")
	I0914 10:41:55.363520    6517 client.go:168] LocalClient.Create starting
	I0914 10:41:55.363653    6517 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/ca.pem
	I0914 10:41:55.363718    6517 main.go:141] libmachine: Decoding PEM data...
	I0914 10:41:55.363735    6517 main.go:141] libmachine: Parsing certificate...
	I0914 10:41:55.363796    6517 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/cert.pem
	I0914 10:41:55.363839    6517 main.go:141] libmachine: Decoding PEM data...
	I0914 10:41:55.363852    6517 main.go:141] libmachine: Parsing certificate...
	I0914 10:41:55.364460    6517 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19643-1079/.minikube/cache/iso/arm64/minikube-v1.34.0-1726281733-19643-arm64.iso...
	I0914 10:41:55.535896    6517 main.go:141] libmachine: Creating SSH key...
	I0914 10:41:55.765314    6517 main.go:141] libmachine: Creating Disk image...
	I0914 10:41:55.765329    6517 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 10:41:55.765532    6517 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/old-k8s-version-661000/disk.qcow2.raw /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/old-k8s-version-661000/disk.qcow2
	I0914 10:41:55.775277    6517 main.go:141] libmachine: STDOUT: 
	I0914 10:41:55.775295    6517 main.go:141] libmachine: STDERR: 
	I0914 10:41:55.775351    6517 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/old-k8s-version-661000/disk.qcow2 +20000M
	I0914 10:41:55.783621    6517 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 10:41:55.783637    6517 main.go:141] libmachine: STDERR: 
	I0914 10:41:55.783650    6517 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/old-k8s-version-661000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/old-k8s-version-661000/disk.qcow2
	I0914 10:41:55.783654    6517 main.go:141] libmachine: Starting QEMU VM...
	I0914 10:41:55.783664    6517 qemu.go:418] Using hvf for hardware acceleration
	I0914 10:41:55.783706    6517 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/old-k8s-version-661000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/old-k8s-version-661000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/old-k8s-version-661000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:f9:c3:43:1c:d5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/old-k8s-version-661000/disk.qcow2
	I0914 10:41:55.785425    6517 main.go:141] libmachine: STDOUT: 
	I0914 10:41:55.785438    6517 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 10:41:55.785451    6517 client.go:171] duration metric: took 421.942167ms to LocalClient.Create
	I0914 10:41:57.787708    6517 start.go:128] duration metric: took 2.480146791s to createHost
	I0914 10:41:57.787773    6517 start.go:83] releasing machines lock for "old-k8s-version-661000", held for 2.480734834s
	W0914 10:41:57.788020    6517 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-661000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-661000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 10:41:57.797572    6517 out.go:201] 
	W0914 10:41:57.810530    6517 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 10:41:57.810569    6517 out.go:270] * 
	* 
	W0914 10:41:57.812027    6517 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 10:41:57.822532    6517 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-661000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-661000 -n old-k8s-version-661000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-661000 -n old-k8s-version-661000: exit status 7 (46.376584ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-661000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (10.32s)
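
Both recovery steps that minikube itself suggests in the output above can be run verbatim against this profile:

	# Cleanup suggested by the error message
	out/minikube-darwin-arm64 delete -p old-k8s-version-661000
	# Log collection suggested by the error box
	out/minikube-darwin-arm64 logs --file=logs.txt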

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-661000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-661000 create -f testdata/busybox.yaml: exit status 1 (27.863708ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-661000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-661000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-661000 -n old-k8s-version-661000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-661000 -n old-k8s-version-661000: exit status 7 (29.437042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-661000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-661000 -n old-k8s-version-661000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-661000 -n old-k8s-version-661000: exit status 7 (29.773917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-661000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)
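
This failure is a cascade rather than an independent bug: FirstStart never created the VM, so no kubeconfig context named old-k8s-version-661000 was ever written, and kubectl rejects the --context flag before making any API call. That can be confirmed directly:

	# The failed FirstStart never registered a context, so it will be absent here
	kubectl config get-contexts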

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-661000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-661000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-661000 describe deploy/metrics-server -n kube-system: exit status 1 (27.153667ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-661000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-661000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-661000 -n old-k8s-version-661000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-661000 -n old-k8s-version-661000: exit status 7 (30.210292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-661000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)
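
The assertion at start_stop_delete_test.go:221 expects the deployment image to contain " fake.domain/registry.k8s.io/echoserver:1.4", which is simply the --registries override prefixed onto the --images override from the addons enable invocation above. A tiny illustrative sketch of that composition (not the suite's actual code):

    package main

    import "fmt"

    func main() {
        image := "registry.k8s.io/echoserver:1.4" // from --images=MetricsServer=...
        registry := "fake.domain"                 // from --registries=MetricsServer=...
        // The reference the test expects to find in the metrics-server deployment.
        fmt.Println(registry + "/" + image) // fake.domain/registry.k8s.io/echoserver:1.4
    }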

TestStartStop/group/old-k8s-version/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-661000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-661000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.191952459s)

-- stdout --
	* [old-k8s-version-661000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19643
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-661000" primary control-plane node in "old-k8s-version-661000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-661000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-661000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0914 10:42:01.455362    6573 out.go:345] Setting OutFile to fd 1 ...
	I0914 10:42:01.455505    6573 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:42:01.455508    6573 out.go:358] Setting ErrFile to fd 2...
	I0914 10:42:01.455511    6573 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:42:01.455641    6573 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19643-1079/.minikube/bin
	I0914 10:42:01.456651    6573 out.go:352] Setting JSON to false
	I0914 10:42:01.473104    6573 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4284,"bootTime":1726331437,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0914 10:42:01.473182    6573 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0914 10:42:01.478272    6573 out.go:177] * [old-k8s-version-661000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0914 10:42:01.485193    6573 out.go:177]   - MINIKUBE_LOCATION=19643
	I0914 10:42:01.485247    6573 notify.go:220] Checking for updates...
	I0914 10:42:01.492075    6573 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	I0914 10:42:01.495178    6573 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 10:42:01.498181    6573 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 10:42:01.501204    6573 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	I0914 10:42:01.504237    6573 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 10:42:01.507540    6573 config.go:182] Loaded profile config "old-k8s-version-661000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0914 10:42:01.511162    6573 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0914 10:42:01.514179    6573 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 10:42:01.518198    6573 out.go:177] * Using the qemu2 driver based on existing profile
	I0914 10:42:01.525095    6573 start.go:297] selected driver: qemu2
	I0914 10:42:01.525101    6573 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-661000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-661000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 10:42:01.525151    6573 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 10:42:01.527550    6573 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 10:42:01.527576    6573 cni.go:84] Creating CNI manager for ""
	I0914 10:42:01.527596    6573 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0914 10:42:01.527617    6573 start.go:340] cluster config:
	{Name:old-k8s-version-661000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-661000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 10:42:01.531262    6573 iso.go:125] acquiring lock: {Name:mk9ce8cb266895455f3a8fa26b67755853c5c1f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 10:42:01.539069    6573 out.go:177] * Starting "old-k8s-version-661000" primary control-plane node in "old-k8s-version-661000" cluster
	I0914 10:42:01.543207    6573 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0914 10:42:01.543221    6573 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0914 10:42:01.543233    6573 cache.go:56] Caching tarball of preloaded images
	I0914 10:42:01.543295    6573 preload.go:172] Found /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 10:42:01.543300    6573 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0914 10:42:01.543363    6573 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/old-k8s-version-661000/config.json ...
	I0914 10:42:01.543801    6573 start.go:360] acquireMachinesLock for old-k8s-version-661000: {Name:mk3755b4300f547653a71a0686083053e04fcd63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 10:42:01.543831    6573 start.go:364] duration metric: took 23.083µs to acquireMachinesLock for "old-k8s-version-661000"
	I0914 10:42:01.543840    6573 start.go:96] Skipping create...Using existing machine configuration
	I0914 10:42:01.543846    6573 fix.go:54] fixHost starting: 
	I0914 10:42:01.543974    6573 fix.go:112] recreateIfNeeded on old-k8s-version-661000: state=Stopped err=<nil>
	W0914 10:42:01.543983    6573 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 10:42:01.547158    6573 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-661000" ...
	I0914 10:42:01.555172    6573 qemu.go:418] Using hvf for hardware acceleration
	I0914 10:42:01.555207    6573 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/old-k8s-version-661000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/old-k8s-version-661000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/old-k8s-version-661000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:f9:c3:43:1c:d5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/old-k8s-version-661000/disk.qcow2
	I0914 10:42:01.557277    6573 main.go:141] libmachine: STDOUT: 
	I0914 10:42:01.557301    6573 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 10:42:01.557333    6573 fix.go:56] duration metric: took 13.487708ms for fixHost
	I0914 10:42:01.557337    6573 start.go:83] releasing machines lock for "old-k8s-version-661000", held for 13.501833ms
	W0914 10:42:01.557342    6573 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 10:42:01.557377    6573 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 10:42:01.557382    6573 start.go:729] Will try again in 5 seconds ...
	I0914 10:42:06.559375    6573 start.go:360] acquireMachinesLock for old-k8s-version-661000: {Name:mk3755b4300f547653a71a0686083053e04fcd63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 10:42:06.560023    6573 start.go:364] duration metric: took 488.083µs to acquireMachinesLock for "old-k8s-version-661000"
	I0914 10:42:06.560193    6573 start.go:96] Skipping create...Using existing machine configuration
	I0914 10:42:06.560213    6573 fix.go:54] fixHost starting: 
	I0914 10:42:06.560942    6573 fix.go:112] recreateIfNeeded on old-k8s-version-661000: state=Stopped err=<nil>
	W0914 10:42:06.560971    6573 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 10:42:06.565559    6573 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-661000" ...
	I0914 10:42:06.573577    6573 qemu.go:418] Using hvf for hardware acceleration
	I0914 10:42:06.573860    6573 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/old-k8s-version-661000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/old-k8s-version-661000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/old-k8s-version-661000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:f9:c3:43:1c:d5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/old-k8s-version-661000/disk.qcow2
	I0914 10:42:06.583858    6573 main.go:141] libmachine: STDOUT: 
	I0914 10:42:06.584188    6573 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 10:42:06.584266    6573 fix.go:56] duration metric: took 24.05375ms for fixHost
	I0914 10:42:06.584283    6573 start.go:83] releasing machines lock for "old-k8s-version-661000", held for 24.181458ms
	W0914 10:42:06.584492    6573 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-661000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-661000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 10:42:06.592543    6573 out.go:201] 
	W0914 10:42:06.596624    6573 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 10:42:06.596650    6573 out.go:270] * 
	* 
	W0914 10:42:06.599222    6573 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 10:42:06.605430    6573 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-661000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-661000 -n old-k8s-version-661000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-661000 -n old-k8s-version-661000: exit status 7 (60.358167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-661000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.25s)
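
The underlying failure is identical on both restart attempts: socket_vmnet_client cannot reach the socket_vmnet daemon's unix socket at /var/run/socket_vmnet, so qemu-system-aarch64 never receives a network file descriptor and the start aborts with GUEST_PROVISION. A standalone probe (a sketch, assuming only the socket path shown in the log) reproduces the exact error without involving QEMU:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // Dial the same unix socket that socket_vmnet_client connects to.
        conn, err := net.Dial("unix", "/var/run/socket_vmnet")
        if err != nil {
            // With the daemon down this prints "connect: connection refused".
            fmt.Println("socket_vmnet unreachable:", err)
            return
        }
        defer conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }

If the dial is refused while /var/run/socket_vmnet still exists on disk, the daemon has exited and left a stale socket; if it succeeds, the failure lies elsewhere in the start path.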

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-661000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-661000 -n old-k8s-version-661000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-661000 -n old-k8s-version-661000: exit status 7 (32.581459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-661000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-661000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-661000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-661000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.228167ms)

** stderr ** 
	error: context "old-k8s-version-661000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-661000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-661000 -n old-k8s-version-661000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-661000 -n old-k8s-version-661000: exit status 7 (29.950583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-661000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-661000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-661000 -n old-k8s-version-661000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-661000 -n old-k8s-version-661000: exit status 7 (30.924792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-661000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
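
The "(-want +got)" diff above is go-cmp's rendering: every expected v1.20.0 image sits on the -want side and the +got side is empty, which is consistent with "image list" running against a VM that never booted. A minimal reproduction of how such a diff is produced (illustrative values; assuming github.com/google/go-cmp, the library this diff format comes from):

    package main

    import (
        "fmt"

        "github.com/google/go-cmp/cmp"
    )

    func main() {
        want := []string{"k8s.gcr.io/kube-apiserver:v1.20.0", "k8s.gcr.io/pause:3.2"}
        got := []string{} // a stopped host reports no images
        if diff := cmp.Diff(want, got); diff != "" {
            fmt.Printf("v1.20.0 images missing (-want +got):\n%s", diff)
        }
    }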

TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-661000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-661000 --alsologtostderr -v=1: exit status 83 (40.311625ms)

-- stdout --
	* The control-plane node old-k8s-version-661000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-661000"

-- /stdout --
** stderr ** 
	I0914 10:42:06.871498    6592 out.go:345] Setting OutFile to fd 1 ...
	I0914 10:42:06.872400    6592 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:42:06.872406    6592 out.go:358] Setting ErrFile to fd 2...
	I0914 10:42:06.872409    6592 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:42:06.872531    6592 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19643-1079/.minikube/bin
	I0914 10:42:06.872735    6592 out.go:352] Setting JSON to false
	I0914 10:42:06.872742    6592 mustload.go:65] Loading cluster: old-k8s-version-661000
	I0914 10:42:06.872971    6592 config.go:182] Loaded profile config "old-k8s-version-661000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0914 10:42:06.877753    6592 out.go:177] * The control-plane node old-k8s-version-661000 host is not running: state=Stopped
	I0914 10:42:06.880789    6592 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-661000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-661000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-661000 -n old-k8s-version-661000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-661000 -n old-k8s-version-661000: exit status 7 (29.553042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-661000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-661000 -n old-k8s-version-661000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-661000 -n old-k8s-version-661000: exit status 7 (28.942125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-661000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)
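
Two exit codes recur throughout these post-mortems: "minikube status" returns 7 when the profile exists but the host is stopped (the "may be ok" note), and "minikube pause" returns 83 when the control-plane node is not running. A sketch of the status probe the helper runs, using the same binary and flags as the log (exit-code meanings taken from the helper output above):

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-darwin-arm64", "status",
            "--format={{.Host}}", "-p", "old-k8s-version-661000")
        out, err := cmd.Output()
        var exitErr *exec.ExitError
        if errors.As(err, &exitErr) {
            // Exit code 7 pairs with the "Stopped" state seen above.
            fmt.Printf("host state %q, exit code %d\n", out, exitErr.ExitCode())
            return
        }
        if err != nil {
            panic(err)
        }
        fmt.Printf("host state %q\n", out)
    }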

TestStartStop/group/no-preload/serial/FirstStart (9.93s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-835000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-835000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.876371584s)

-- stdout --
	* [no-preload-835000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19643
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-835000" primary control-plane node in "no-preload-835000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-835000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0914 10:42:07.193603    6609 out.go:345] Setting OutFile to fd 1 ...
	I0914 10:42:07.193753    6609 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:42:07.193756    6609 out.go:358] Setting ErrFile to fd 2...
	I0914 10:42:07.193759    6609 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:42:07.193895    6609 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19643-1079/.minikube/bin
	I0914 10:42:07.195039    6609 out.go:352] Setting JSON to false
	I0914 10:42:07.211713    6609 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4290,"bootTime":1726331437,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0914 10:42:07.211779    6609 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0914 10:42:07.214796    6609 out.go:177] * [no-preload-835000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0914 10:42:07.221760    6609 out.go:177]   - MINIKUBE_LOCATION=19643
	I0914 10:42:07.221804    6609 notify.go:220] Checking for updates...
	I0914 10:42:07.228636    6609 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	I0914 10:42:07.231723    6609 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 10:42:07.234731    6609 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 10:42:07.237791    6609 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	I0914 10:42:07.240814    6609 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 10:42:07.244046    6609 config.go:182] Loaded profile config "multinode-699000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 10:42:07.244109    6609 config.go:182] Loaded profile config "stopped-upgrade-130000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0914 10:42:07.244154    6609 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 10:42:07.248774    6609 out.go:177] * Using the qemu2 driver based on user configuration
	I0914 10:42:07.255795    6609 start.go:297] selected driver: qemu2
	I0914 10:42:07.255803    6609 start.go:901] validating driver "qemu2" against <nil>
	I0914 10:42:07.255810    6609 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 10:42:07.258111    6609 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0914 10:42:07.260711    6609 out.go:177] * Automatically selected the socket_vmnet network
	I0914 10:42:07.263861    6609 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 10:42:07.263888    6609 cni.go:84] Creating CNI manager for ""
	I0914 10:42:07.263911    6609 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 10:42:07.263917    6609 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0914 10:42:07.263957    6609 start.go:340] cluster config:
	{Name:no-preload-835000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-835000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 10:42:07.267435    6609 iso.go:125] acquiring lock: {Name:mk9ce8cb266895455f3a8fa26b67755853c5c1f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 10:42:07.274649    6609 out.go:177] * Starting "no-preload-835000" primary control-plane node in "no-preload-835000" cluster
	I0914 10:42:07.278724    6609 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0914 10:42:07.278788    6609 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/no-preload-835000/config.json ...
	I0914 10:42:07.278801    6609 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/no-preload-835000/config.json: {Name:mk8c5ed0d5785928aaad6cdac43480ca7e9e9642 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 10:42:07.278812    6609 cache.go:107] acquiring lock: {Name:mke2dcde6b6e0cacbee12e7df28e773e9d60b74a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 10:42:07.278812    6609 cache.go:107] acquiring lock: {Name:mk52ea6d39113c7a356ef24c0c05730d902d678d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 10:42:07.278868    6609 cache.go:115] /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0914 10:42:07.278873    6609 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 66.583µs
	I0914 10:42:07.278879    6609 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0914 10:42:07.278884    6609 cache.go:107] acquiring lock: {Name:mkc536b69ce3f143c626ea22250430df6d382d27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 10:42:07.278927    6609 cache.go:107] acquiring lock: {Name:mk719fff8f019361b9ece73560ea63a0b34ef911 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 10:42:07.278987    6609 cache.go:107] acquiring lock: {Name:mk712df137fd353ba8ae25b85531286b7675bc25 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 10:42:07.279020    6609 start.go:360] acquireMachinesLock for no-preload-835000: {Name:mk3755b4300f547653a71a0686083053e04fcd63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 10:42:07.278965    6609 cache.go:107] acquiring lock: {Name:mk511ccc6408e49656668f754e2144200a3179b9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 10:42:07.279035    6609 cache.go:107] acquiring lock: {Name:mkd2ccebc03e326eaf0f60de33df5d21b5dd2e35 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 10:42:07.279072    6609 cache.go:107] acquiring lock: {Name:mk42f03ea37a9334b51ae1c5e7b5f23f4fa62fda Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 10:42:07.279049    6609 start.go:364] duration metric: took 24.5µs to acquireMachinesLock for "no-preload-835000"
	I0914 10:42:07.279090    6609 start.go:93] Provisioning new machine with config: &{Name:no-preload-835000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-835000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 10:42:07.279114    6609 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 10:42:07.279112    6609 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0914 10:42:07.279113    6609 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0914 10:42:07.279179    6609 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0914 10:42:07.279245    6609 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0914 10:42:07.279285    6609 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0914 10:42:07.279599    6609 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 10:42:07.279712    6609 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0914 10:42:07.283747    6609 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0914 10:42:07.287989    6609 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0914 10:42:07.291414    6609 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 10:42:07.291478    6609 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0914 10:42:07.291549    6609 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0914 10:42:07.291550    6609 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0914 10:42:07.291543    6609 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0914 10:42:07.291564    6609 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0914 10:42:07.299714    6609 start.go:159] libmachine.API.Create for "no-preload-835000" (driver="qemu2")
	I0914 10:42:07.299735    6609 client.go:168] LocalClient.Create starting
	I0914 10:42:07.299800    6609 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/ca.pem
	I0914 10:42:07.299828    6609 main.go:141] libmachine: Decoding PEM data...
	I0914 10:42:07.299840    6609 main.go:141] libmachine: Parsing certificate...
	I0914 10:42:07.299880    6609 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/cert.pem
	I0914 10:42:07.299903    6609 main.go:141] libmachine: Decoding PEM data...
	I0914 10:42:07.299913    6609 main.go:141] libmachine: Parsing certificate...
	I0914 10:42:07.300216    6609 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19643-1079/.minikube/cache/iso/arm64/minikube-v1.34.0-1726281733-19643-arm64.iso...
	I0914 10:42:07.467686    6609 main.go:141] libmachine: Creating SSH key...
	I0914 10:42:07.583517    6609 main.go:141] libmachine: Creating Disk image...
	I0914 10:42:07.583534    6609 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 10:42:07.583710    6609 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/no-preload-835000/disk.qcow2.raw /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/no-preload-835000/disk.qcow2
	I0914 10:42:07.593679    6609 main.go:141] libmachine: STDOUT: 
	I0914 10:42:07.593696    6609 main.go:141] libmachine: STDERR: 
	I0914 10:42:07.593758    6609 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/no-preload-835000/disk.qcow2 +20000M
	I0914 10:42:07.602337    6609 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 10:42:07.602352    6609 main.go:141] libmachine: STDERR: 
	I0914 10:42:07.602365    6609 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/no-preload-835000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/no-preload-835000/disk.qcow2
	I0914 10:42:07.602369    6609 main.go:141] libmachine: Starting QEMU VM...
	I0914 10:42:07.602384    6609 qemu.go:418] Using hvf for hardware acceleration
	I0914 10:42:07.602408    6609 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/no-preload-835000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/no-preload-835000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/no-preload-835000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:05:eb:32:45:57 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/no-preload-835000/disk.qcow2
	I0914 10:42:07.604238    6609 main.go:141] libmachine: STDOUT: 
	I0914 10:42:07.604255    6609 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 10:42:07.604273    6609 client.go:171] duration metric: took 304.543833ms to LocalClient.Create
	I0914 10:42:07.657388    6609 cache.go:162] opening:  /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10
	I0914 10:42:07.674321    6609 cache.go:162] opening:  /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0914 10:42:07.703413    6609 cache.go:162] opening:  /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3
	I0914 10:42:07.708680    6609 cache.go:162] opening:  /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1
	I0914 10:42:07.722351    6609 cache.go:162] opening:  /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1
	I0914 10:42:07.751338    6609 cache.go:162] opening:  /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0
	I0914 10:42:07.783675    6609 cache.go:162] opening:  /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1
	I0914 10:42:07.796435    6609 cache.go:157] /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0914 10:42:07.796447    6609 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 517.584209ms
	I0914 10:42:07.796458    6609 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0914 10:42:09.604407    6609 start.go:128] duration metric: took 2.325375416s to createHost
	I0914 10:42:09.604460    6609 start.go:83] releasing machines lock for "no-preload-835000", held for 2.325472375s
	W0914 10:42:09.604502    6609 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 10:42:09.609973    6609 out.go:177] * Deleting "no-preload-835000" in qemu2 ...
	W0914 10:42:09.640060    6609 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 10:42:09.640073    6609 start.go:729] Will try again in 5 seconds ...
	I0914 10:42:10.474067    6609 cache.go:157] /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I0914 10:42:10.474087    6609 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1" took 3.195233916s
	I0914 10:42:10.474100    6609 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I0914 10:42:10.940385    6609 cache.go:157] /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I0914 10:42:10.940419    6609 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1" took 3.661679959s
	I0914 10:42:10.940431    6609 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I0914 10:42:11.142671    6609 cache.go:157] /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0914 10:42:11.142698    6609 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 3.863896958s
	I0914 10:42:11.142712    6609 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0914 10:42:11.281709    6609 cache.go:157] /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I0914 10:42:11.281749    6609 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1" took 4.002941708s
	I0914 10:42:11.281767    6609 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I0914 10:42:11.374896    6609 cache.go:157] /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I0914 10:42:11.374916    6609 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1" took 4.096279625s
	I0914 10:42:11.374925    6609 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I0914 10:42:14.640051    6609 start.go:360] acquireMachinesLock for no-preload-835000: {Name:mk3755b4300f547653a71a0686083053e04fcd63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 10:42:14.640565    6609 start.go:364] duration metric: took 424.834µs to acquireMachinesLock for "no-preload-835000"
	I0914 10:42:14.640702    6609 start.go:93] Provisioning new machine with config: &{Name:no-preload-835000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-835000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 10:42:14.641014    6609 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 10:42:14.651604    6609 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0914 10:42:14.702873    6609 start.go:159] libmachine.API.Create for "no-preload-835000" (driver="qemu2")
	I0914 10:42:14.702929    6609 client.go:168] LocalClient.Create starting
	I0914 10:42:14.703053    6609 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/ca.pem
	I0914 10:42:14.703122    6609 main.go:141] libmachine: Decoding PEM data...
	I0914 10:42:14.703141    6609 main.go:141] libmachine: Parsing certificate...
	I0914 10:42:14.703216    6609 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/cert.pem
	I0914 10:42:14.703267    6609 main.go:141] libmachine: Decoding PEM data...
	I0914 10:42:14.703283    6609 main.go:141] libmachine: Parsing certificate...
	I0914 10:42:14.703873    6609 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19643-1079/.minikube/cache/iso/arm64/minikube-v1.34.0-1726281733-19643-arm64.iso...
	I0914 10:42:14.881865    6609 main.go:141] libmachine: Creating SSH key...
	I0914 10:42:14.981910    6609 main.go:141] libmachine: Creating Disk image...
	I0914 10:42:14.981917    6609 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 10:42:14.982115    6609 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/no-preload-835000/disk.qcow2.raw /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/no-preload-835000/disk.qcow2
	I0914 10:42:14.991827    6609 main.go:141] libmachine: STDOUT: 
	I0914 10:42:14.991842    6609 main.go:141] libmachine: STDERR: 
	I0914 10:42:14.991912    6609 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/no-preload-835000/disk.qcow2 +20000M
	I0914 10:42:15.000303    6609 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 10:42:15.000329    6609 main.go:141] libmachine: STDERR: 
	I0914 10:42:15.000344    6609 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/no-preload-835000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/no-preload-835000/disk.qcow2
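The disk-image step above reduces to two stock qemu-img invocations. A minimal sketch of the same sequence run by hand (paths shortened for readability; qemu-img from the Homebrew QEMU install is assumed to be on PATH):

    # Convert the raw seed image produced by libmachine to qcow2, then grow
    # the image by the requested 20000 MB (the +20000M form is a size delta).
    qemu-img convert -f raw -O qcow2 disk.qcow2.raw disk.qcow2
    qemu-img resize disk.qcow2 +20000M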
	I0914 10:42:15.000354    6609 main.go:141] libmachine: Starting QEMU VM...
	I0914 10:42:15.000366    6609 qemu.go:418] Using hvf for hardware acceleration
	I0914 10:42:15.000404    6609 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/no-preload-835000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/no-preload-835000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/no-preload-835000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:48:43:d8:5a:7d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/no-preload-835000/disk.qcow2
	I0914 10:42:15.002244    6609 main.go:141] libmachine: STDOUT: 
	I0914 10:42:15.002272    6609 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 10:42:15.002288    6609 client.go:171] duration metric: took 299.362875ms to LocalClient.Create
	I0914 10:42:16.062134    6609 cache.go:157] /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0914 10:42:16.062178    6609 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 8.783631584s
	I0914 10:42:16.062196    6609 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0914 10:42:16.062304    6609 cache.go:87] Successfully saved all images to host disk.
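Each "save to tar file ... succeeded" line above corresponds to one image tarball under the profile's cache directory; a quick, non-destructive way to confirm what was written (directory layout taken from the paths in this log):

    # List the cached arm64 image tarballs for the registry.k8s.io images.
    ls -R /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io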
	I0914 10:42:17.004392    6609 start.go:128] duration metric: took 2.363413416s to createHost
	I0914 10:42:17.004465    6609 start.go:83] releasing machines lock for "no-preload-835000", held for 2.363968417s
	W0914 10:42:17.004756    6609 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-835000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 10:42:17.016316    6609 out.go:201] 
	W0914 10:42:17.021385    6609 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 10:42:17.021416    6609 out.go:270] * 
	W0914 10:42:17.023435    6609 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 10:42:17.030250    6609 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-835000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-835000 -n no-preload-835000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-835000 -n no-preload-835000: exit status 7 (50.208375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-835000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.93s)
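Every qemu2 failure in this group reduces to the same root cause: `Failed to connect to "/var/run/socket_vmnet": Connection refused`, i.e. no socket_vmnet daemon is listening on the host. A hedged diagnostic sketch, assuming the /opt/socket_vmnet layout visible in the command lines above (the gateway address is an illustrative value, not taken from this log):

    # Does anything serve the unix socket the qemu wrapper expects?
    sudo ls -l /var/run/socket_vmnet
    # If the socket is missing, start the daemon manually; it needs root for vmnet.
    sudo /opt/socket_vmnet/bin/socket_vmnet \
        --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet &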

TestStartStop/group/embed-certs/serial/FirstStart (10.56s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-486000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-486000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (10.496131833s)

-- stdout --
	* [embed-certs-486000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19643
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-486000" primary control-plane node in "embed-certs-486000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-486000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0914 10:42:16.399643    6656 out.go:345] Setting OutFile to fd 1 ...
	I0914 10:42:16.399793    6656 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:42:16.399797    6656 out.go:358] Setting ErrFile to fd 2...
	I0914 10:42:16.399799    6656 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:42:16.399937    6656 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19643-1079/.minikube/bin
	I0914 10:42:16.400991    6656 out.go:352] Setting JSON to false
	I0914 10:42:16.417487    6656 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4299,"bootTime":1726331437,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0914 10:42:16.417553    6656 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0914 10:42:16.421655    6656 out.go:177] * [embed-certs-486000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0914 10:42:16.428604    6656 out.go:177]   - MINIKUBE_LOCATION=19643
	I0914 10:42:16.428717    6656 notify.go:220] Checking for updates...
	I0914 10:42:16.434516    6656 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	I0914 10:42:16.437585    6656 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 10:42:16.438877    6656 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 10:42:16.441541    6656 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	I0914 10:42:16.444529    6656 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 10:42:16.448038    6656 config.go:182] Loaded profile config "multinode-699000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 10:42:16.448107    6656 config.go:182] Loaded profile config "no-preload-835000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 10:42:16.448151    6656 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 10:42:16.452481    6656 out.go:177] * Using the qemu2 driver based on user configuration
	I0914 10:42:16.459591    6656 start.go:297] selected driver: qemu2
	I0914 10:42:16.459599    6656 start.go:901] validating driver "qemu2" against <nil>
	I0914 10:42:16.459607    6656 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 10:42:16.461986    6656 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0914 10:42:16.464512    6656 out.go:177] * Automatically selected the socket_vmnet network
	I0914 10:42:16.467619    6656 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 10:42:16.467641    6656 cni.go:84] Creating CNI manager for ""
	I0914 10:42:16.467670    6656 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 10:42:16.467684    6656 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0914 10:42:16.467728    6656 start.go:340] cluster config:
	{Name:embed-certs-486000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-486000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 10:42:16.471432    6656 iso.go:125] acquiring lock: {Name:mk9ce8cb266895455f3a8fa26b67755853c5c1f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 10:42:16.478551    6656 out.go:177] * Starting "embed-certs-486000" primary control-plane node in "embed-certs-486000" cluster
	I0914 10:42:16.482566    6656 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0914 10:42:16.482582    6656 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0914 10:42:16.482594    6656 cache.go:56] Caching tarball of preloaded images
	I0914 10:42:16.482662    6656 preload.go:172] Found /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 10:42:16.482668    6656 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0914 10:42:16.482730    6656 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/embed-certs-486000/config.json ...
	I0914 10:42:16.482744    6656 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/embed-certs-486000/config.json: {Name:mkd2b6c3c06527c1f5dcb1138d7070087fee28de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 10:42:16.483247    6656 start.go:360] acquireMachinesLock for embed-certs-486000: {Name:mk3755b4300f547653a71a0686083053e04fcd63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 10:42:17.004594    6656 start.go:364] duration metric: took 521.321042ms to acquireMachinesLock for "embed-certs-486000"
	I0914 10:42:17.004760    6656 start.go:93] Provisioning new machine with config: &{Name:embed-certs-486000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-486000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 10:42:17.004923    6656 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 10:42:17.012301    6656 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0914 10:42:17.061596    6656 start.go:159] libmachine.API.Create for "embed-certs-486000" (driver="qemu2")
	I0914 10:42:17.061651    6656 client.go:168] LocalClient.Create starting
	I0914 10:42:17.061790    6656 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/ca.pem
	I0914 10:42:17.061847    6656 main.go:141] libmachine: Decoding PEM data...
	I0914 10:42:17.061863    6656 main.go:141] libmachine: Parsing certificate...
	I0914 10:42:17.061929    6656 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/cert.pem
	I0914 10:42:17.061976    6656 main.go:141] libmachine: Decoding PEM data...
	I0914 10:42:17.061993    6656 main.go:141] libmachine: Parsing certificate...
	I0914 10:42:17.062665    6656 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19643-1079/.minikube/cache/iso/arm64/minikube-v1.34.0-1726281733-19643-arm64.iso...
	I0914 10:42:17.290715    6656 main.go:141] libmachine: Creating SSH key...
	I0914 10:42:17.409240    6656 main.go:141] libmachine: Creating Disk image...
	I0914 10:42:17.409247    6656 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 10:42:17.409419    6656 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/embed-certs-486000/disk.qcow2.raw /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/embed-certs-486000/disk.qcow2
	I0914 10:42:17.418171    6656 main.go:141] libmachine: STDOUT: 
	I0914 10:42:17.418195    6656 main.go:141] libmachine: STDERR: 
	I0914 10:42:17.418245    6656 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/embed-certs-486000/disk.qcow2 +20000M
	I0914 10:42:17.426605    6656 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 10:42:17.426620    6656 main.go:141] libmachine: STDERR: 
	I0914 10:42:17.426640    6656 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/embed-certs-486000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/embed-certs-486000/disk.qcow2
	I0914 10:42:17.426645    6656 main.go:141] libmachine: Starting QEMU VM...
	I0914 10:42:17.426657    6656 qemu.go:418] Using hvf for hardware acceleration
	I0914 10:42:17.426681    6656 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/embed-certs-486000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/embed-certs-486000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/embed-certs-486000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:c5:8b:93:ca:a8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/embed-certs-486000/disk.qcow2
	I0914 10:42:17.428460    6656 main.go:141] libmachine: STDOUT: 
	I0914 10:42:17.428476    6656 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 10:42:17.428499    6656 client.go:171] duration metric: took 366.854416ms to LocalClient.Create
	I0914 10:42:19.430612    6656 start.go:128] duration metric: took 2.425734s to createHost
	I0914 10:42:19.430730    6656 start.go:83] releasing machines lock for "embed-certs-486000", held for 2.426153917s
	W0914 10:42:19.430805    6656 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 10:42:19.438039    6656 out.go:177] * Deleting "embed-certs-486000" in qemu2 ...
	W0914 10:42:19.475939    6656 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 10:42:19.475978    6656 start.go:729] Will try again in 5 seconds ...
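Because the daemon keeps refusing connections across the built-in retry, it is worth checking whether socket_vmnet is supposed to be supervised on this host; a sketch assuming a launchd-managed install (the service label varies by installation and is not shown in this log):

    # Look for a loaded socket_vmnet job; an exited or missing job would
    # explain the persistent "Connection refused" above.
    sudo launchctl list | grep -i socket_vmnet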
	I0914 10:42:24.477934    6656 start.go:360] acquireMachinesLock for embed-certs-486000: {Name:mk3755b4300f547653a71a0686083053e04fcd63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 10:42:24.478394    6656 start.go:364] duration metric: took 379.458µs to acquireMachinesLock for "embed-certs-486000"
	I0914 10:42:24.478521    6656 start.go:93] Provisioning new machine with config: &{Name:embed-certs-486000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-486000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 10:42:24.478902    6656 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 10:42:24.488469    6656 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0914 10:42:24.537435    6656 start.go:159] libmachine.API.Create for "embed-certs-486000" (driver="qemu2")
	I0914 10:42:24.537512    6656 client.go:168] LocalClient.Create starting
	I0914 10:42:24.537691    6656 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/ca.pem
	I0914 10:42:24.537762    6656 main.go:141] libmachine: Decoding PEM data...
	I0914 10:42:24.537783    6656 main.go:141] libmachine: Parsing certificate...
	I0914 10:42:24.537853    6656 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/cert.pem
	I0914 10:42:24.537899    6656 main.go:141] libmachine: Decoding PEM data...
	I0914 10:42:24.537910    6656 main.go:141] libmachine: Parsing certificate...
	I0914 10:42:24.538541    6656 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19643-1079/.minikube/cache/iso/arm64/minikube-v1.34.0-1726281733-19643-arm64.iso...
	I0914 10:42:24.712175    6656 main.go:141] libmachine: Creating SSH key...
	I0914 10:42:24.786209    6656 main.go:141] libmachine: Creating Disk image...
	I0914 10:42:24.786215    6656 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 10:42:24.786387    6656 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/embed-certs-486000/disk.qcow2.raw /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/embed-certs-486000/disk.qcow2
	I0914 10:42:24.795773    6656 main.go:141] libmachine: STDOUT: 
	I0914 10:42:24.795790    6656 main.go:141] libmachine: STDERR: 
	I0914 10:42:24.795861    6656 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/embed-certs-486000/disk.qcow2 +20000M
	I0914 10:42:24.803758    6656 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 10:42:24.803779    6656 main.go:141] libmachine: STDERR: 
	I0914 10:42:24.803789    6656 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/embed-certs-486000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/embed-certs-486000/disk.qcow2
	I0914 10:42:24.803794    6656 main.go:141] libmachine: Starting QEMU VM...
	I0914 10:42:24.803801    6656 qemu.go:418] Using hvf for hardware acceleration
	I0914 10:42:24.803829    6656 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/embed-certs-486000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/embed-certs-486000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/embed-certs-486000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:57:ce:67:db:70 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/embed-certs-486000/disk.qcow2
	I0914 10:42:24.805511    6656 main.go:141] libmachine: STDOUT: 
	I0914 10:42:24.805526    6656 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 10:42:24.805538    6656 client.go:171] duration metric: took 268.013167ms to LocalClient.Create
	I0914 10:42:26.807157    6656 start.go:128] duration metric: took 2.328281375s to createHost
	I0914 10:42:26.807254    6656 start.go:83] releasing machines lock for "embed-certs-486000", held for 2.32893425s
	W0914 10:42:26.807622    6656 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-486000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 10:42:26.812390    6656 out.go:201] 
	W0914 10:42:26.832372    6656 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 10:42:26.832401    6656 out.go:270] * 
	W0914 10:42:26.834937    6656 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 10:42:26.849342    6656 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-486000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-486000 -n embed-certs-486000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-486000 -n embed-certs-486000: exit status 7 (65.098584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-486000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (10.56s)

TestStartStop/group/no-preload/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-835000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-835000 create -f testdata/busybox.yaml: exit status 1 (30.14375ms)

** stderr ** 
	error: context "no-preload-835000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-835000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-835000 -n no-preload-835000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-835000 -n no-preload-835000: exit status 7 (32.945083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-835000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-835000 -n no-preload-835000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-835000 -n no-preload-835000: exit status 7 (33.238958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-835000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.10s)
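The DeployApp failure is purely a knock-on effect: FirstStart never created the cluster, so the kubeconfig has no matching context. A quick way to see which contexts kubectl actually knows about under the test's KUBECONFIG:

    # Lists every context in the active kubeconfig; no-preload-835000 is
    # expected to be absent here because provisioning failed.
    kubectl config get-contexts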

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.15s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-835000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-835000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-835000 describe deploy/metrics-server -n kube-system: exit status 1 (27.902291ms)

** stderr ** 
	error: context "no-preload-835000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-835000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-835000 -n no-preload-835000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-835000 -n no-preload-835000: exit status 7 (30.293792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-835000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.15s)
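Had the cluster come up, the check at start_stop_delete_test.go:221 amounts to reading the image off the metrics-server deployment; a sketch of the same inspection by hand, reusing the context and namespace from the test invocation above:

    # Print only the container image of the metrics-server deployment; the
    # test expects it to contain fake.domain/registry.k8s.io/echoserver:1.4.
    kubectl --context no-preload-835000 -n kube-system get deploy metrics-server \
        -o jsonpath='{.spec.template.spec.containers[0].image}'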

TestStartStop/group/no-preload/serial/SecondStart (5.47s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-835000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-835000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.415831125s)

-- stdout --
	* [no-preload-835000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19643
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-835000" primary control-plane node in "no-preload-835000" cluster
	* Restarting existing qemu2 VM for "no-preload-835000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-835000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
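SecondStart reuses the stopped profile left behind by FirstStart, so it replays the same broken VM definition. The recovery the tool itself suggests earlier in this report ("minikube delete -p ... may fix it") would look like:

    # Discard the half-created profile, then create it from scratch once
    # socket_vmnet is reachable again.
    out/minikube-darwin-arm64 delete -p no-preload-835000
    out/minikube-darwin-arm64 start -p no-preload-835000 --driver=qemu2 --kubernetes-version=v1.31.1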
** stderr ** 
	I0914 10:42:21.499015    6702 out.go:345] Setting OutFile to fd 1 ...
	I0914 10:42:21.499133    6702 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:42:21.499136    6702 out.go:358] Setting ErrFile to fd 2...
	I0914 10:42:21.499139    6702 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:42:21.499279    6702 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19643-1079/.minikube/bin
	I0914 10:42:21.500351    6702 out.go:352] Setting JSON to false
	I0914 10:42:21.516541    6702 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4304,"bootTime":1726331437,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0914 10:42:21.516608    6702 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0914 10:42:21.521684    6702 out.go:177] * [no-preload-835000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0914 10:42:21.527702    6702 out.go:177]   - MINIKUBE_LOCATION=19643
	I0914 10:42:21.527825    6702 notify.go:220] Checking for updates...
	I0914 10:42:21.534693    6702 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	I0914 10:42:21.537684    6702 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 10:42:21.540680    6702 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 10:42:21.543767    6702 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	I0914 10:42:21.545273    6702 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 10:42:21.549018    6702 config.go:182] Loaded profile config "no-preload-835000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 10:42:21.549267    6702 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 10:42:21.553682    6702 out.go:177] * Using the qemu2 driver based on existing profile
	I0914 10:42:21.559634    6702 start.go:297] selected driver: qemu2
	I0914 10:42:21.559640    6702 start.go:901] validating driver "qemu2" against &{Name:no-preload-835000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-835000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 10:42:21.559690    6702 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 10:42:21.561911    6702 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 10:42:21.561936    6702 cni.go:84] Creating CNI manager for ""
	I0914 10:42:21.561959    6702 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 10:42:21.561983    6702 start.go:340] cluster config:
	{Name:no-preload-835000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-835000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 10:42:21.565523    6702 iso.go:125] acquiring lock: {Name:mk9ce8cb266895455f3a8fa26b67755853c5c1f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 10:42:21.573671    6702 out.go:177] * Starting "no-preload-835000" primary control-plane node in "no-preload-835000" cluster
	I0914 10:42:21.577858    6702 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0914 10:42:21.577943    6702 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/no-preload-835000/config.json ...
	I0914 10:42:21.577990    6702 cache.go:107] acquiring lock: {Name:mke2dcde6b6e0cacbee12e7df28e773e9d60b74a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 10:42:21.577990    6702 cache.go:107] acquiring lock: {Name:mk42f03ea37a9334b51ae1c5e7b5f23f4fa62fda Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 10:42:21.578009    6702 cache.go:107] acquiring lock: {Name:mkc536b69ce3f143c626ea22250430df6d382d27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 10:42:21.578065    6702 cache.go:115] /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0914 10:42:21.578074    6702 cache.go:115] /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I0914 10:42:21.578072    6702 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 84.458µs
	I0914 10:42:21.578079    6702 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1" took 102.167µs
	I0914 10:42:21.578087    6702 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I0914 10:42:21.578082    6702 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0914 10:42:21.577985    6702 cache.go:107] acquiring lock: {Name:mk52ea6d39113c7a356ef24c0c05730d902d678d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 10:42:21.578088    6702 cache.go:107] acquiring lock: {Name:mk712df137fd353ba8ae25b85531286b7675bc25 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 10:42:21.578100    6702 cache.go:107] acquiring lock: {Name:mk511ccc6408e49656668f754e2144200a3179b9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 10:42:21.578131    6702 cache.go:115] /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I0914 10:42:21.578133    6702 cache.go:115] /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I0914 10:42:21.578139    6702 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1" took 51.166µs
	I0914 10:42:21.578137    6702 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1" took 164.333µs
	I0914 10:42:21.578148    6702 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I0914 10:42:21.578155    6702 cache.go:115] /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I0914 10:42:21.578094    6702 cache.go:115] /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0914 10:42:21.578159    6702 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I0914 10:42:21.578160    6702 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1" took 60.875µs
	I0914 10:42:21.578168    6702 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I0914 10:42:21.578136    6702 cache.go:107] acquiring lock: {Name:mkd2ccebc03e326eaf0f60de33df5d21b5dd2e35 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 10:42:21.578164    6702 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 155.5µs
	I0914 10:42:21.578210    6702 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0914 10:42:21.578206    6702 cache.go:107] acquiring lock: {Name:mk719fff8f019361b9ece73560ea63a0b34ef911 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 10:42:21.578242    6702 cache.go:115] /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0914 10:42:21.578247    6702 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 137µs
	I0914 10:42:21.578251    6702 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0914 10:42:21.578259    6702 cache.go:115] /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0914 10:42:21.578263    6702 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 91.375µs
	I0914 10:42:21.578272    6702 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0914 10:42:21.578277    6702 cache.go:87] Successfully saved all images to host disk.
	I0914 10:42:21.578405    6702 start.go:360] acquireMachinesLock for no-preload-835000: {Name:mk3755b4300f547653a71a0686083053e04fcd63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 10:42:21.578442    6702 start.go:364] duration metric: took 30.458µs to acquireMachinesLock for "no-preload-835000"
	I0914 10:42:21.578451    6702 start.go:96] Skipping create...Using existing machine configuration
	I0914 10:42:21.578455    6702 fix.go:54] fixHost starting: 
	I0914 10:42:21.578580    6702 fix.go:112] recreateIfNeeded on no-preload-835000: state=Stopped err=<nil>
	W0914 10:42:21.578590    6702 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 10:42:21.586673    6702 out.go:177] * Restarting existing qemu2 VM for "no-preload-835000" ...
	I0914 10:42:21.590711    6702 qemu.go:418] Using hvf for hardware acceleration
	I0914 10:42:21.590761    6702 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/no-preload-835000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/no-preload-835000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/no-preload-835000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:48:43:d8:5a:7d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/no-preload-835000/disk.qcow2
	I0914 10:42:21.592977    6702 main.go:141] libmachine: STDOUT: 
	I0914 10:42:21.593002    6702 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 10:42:21.593039    6702 fix.go:56] duration metric: took 14.582333ms for fixHost
	I0914 10:42:21.593045    6702 start.go:83] releasing machines lock for "no-preload-835000", held for 14.598209ms
	W0914 10:42:21.593051    6702 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 10:42:21.593085    6702 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 10:42:21.593090    6702 start.go:729] Will try again in 5 seconds ...
	I0914 10:42:26.595109    6702 start.go:360] acquireMachinesLock for no-preload-835000: {Name:mk3755b4300f547653a71a0686083053e04fcd63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 10:42:26.807432    6702 start.go:364] duration metric: took 212.249875ms to acquireMachinesLock for "no-preload-835000"
	I0914 10:42:26.807610    6702 start.go:96] Skipping create...Using existing machine configuration
	I0914 10:42:26.807635    6702 fix.go:54] fixHost starting: 
	I0914 10:42:26.808380    6702 fix.go:112] recreateIfNeeded on no-preload-835000: state=Stopped err=<nil>
	W0914 10:42:26.808406    6702 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 10:42:26.828268    6702 out.go:177] * Restarting existing qemu2 VM for "no-preload-835000" ...
	I0914 10:42:26.835282    6702 qemu.go:418] Using hvf for hardware acceleration
	I0914 10:42:26.835607    6702 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/no-preload-835000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/no-preload-835000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/no-preload-835000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:48:43:d8:5a:7d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/no-preload-835000/disk.qcow2
	I0914 10:42:26.844559    6702 main.go:141] libmachine: STDOUT: 
	I0914 10:42:26.845127    6702 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 10:42:26.845229    6702 fix.go:56] duration metric: took 37.592417ms for fixHost
	I0914 10:42:26.845250    6702 start.go:83] releasing machines lock for "no-preload-835000", held for 37.777292ms
	W0914 10:42:26.845504    6702 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-835000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-835000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 10:42:26.859263    6702 out.go:201] 
	W0914 10:42:26.863379    6702 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 10:42:26.863422    6702 out.go:270] * 
	* 
	W0914 10:42:26.865756    6702 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 10:42:26.874337    6702 out.go:201] 

** /stderr **
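Note: every QEMU start failure in this section shares the root cause visible in the stderr above: nothing is listening on /var/run/socket_vmnet, so /opt/socket_vmnet/bin/socket_vmnet_client cannot hand QEMU a network file descriptor and the driver dies with "Connection refused". A minimal preflight probe, sketched here as a hypothetical standalone helper (not part of minikube), would surface that before any VM is launched:

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet"
		// socket_vmnet_client talks to this unix socket; if no daemon is
		// listening, QEMU startup fails exactly as the log shows.
		conn, err := net.DialTimeout("unix", sock, time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is listening")
	}

On this host the probe would print the same connection-refused error and exit non-zero, confirming the daemon is down rather than anything cluster-specific.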
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-835000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-835000 -n no-preload-835000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-835000 -n no-preload-835000: exit status 7 (52.998625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-835000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.47s)

TestStartStop/group/embed-certs/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-486000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-486000 create -f testdata/busybox.yaml: exit status 1 (31.061208ms)

** stderr ** 
	error: context "embed-certs-486000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-486000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-486000 -n embed-certs-486000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-486000 -n embed-certs-486000: exit status 7 (30.470458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-486000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-486000 -n embed-certs-486000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-486000 -n embed-certs-486000: exit status 7 (35.207083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-486000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.10s)
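Note: the kubectl failure above is a downstream symptom: the earlier start never got far enough to write an "embed-certs-486000" context into the kubeconfig. A hedged sketch, using client-go's clientcmd package (the check is illustrative, not taken from the suite), of asserting a context exists before shelling out to kubectl:

	package main

	import (
		"fmt"
		"os"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.LoadFromFile(os.Getenv("KUBECONFIG"))
		if err != nil {
			fmt.Fprintln(os.Stderr, "load kubeconfig:", err)
			os.Exit(1)
		}
		const name = "embed-certs-486000"
		if _, ok := cfg.Contexts[name]; !ok {
			// This is the state kubectl reports as
			// `error: context "embed-certs-486000" does not exist`.
			fmt.Fprintf(os.Stderr, "context %q does not exist\n", name)
			os.Exit(1)
		}
	}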

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-835000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-835000 -n no-preload-835000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-835000 -n no-preload-835000: exit status 7 (33.070541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-835000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)
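Note: the post-mortem helper keeps invoking minikube status --format={{.Host}}; the --format value is a Go text/template rendered against minikube's status struct, which is why the command prints just the bare word Stopped. A reduced sketch of that mechanism (the one-field struct here is an illustration, not minikube's real type):

	package main

	import (
		"os"
		"text/template"
	)

	// Status stands in, much reduced, for the struct minikube renders;
	// only Host is shown, matching --format={{.Host}}.
	type Status struct {
		Host string
	}

	func main() {
		tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
		// A stopped VM renders as the bare word "Stopped".
		_ = tmpl.Execute(os.Stdout, Status{Host: "Stopped"})
	}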

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-835000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-835000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-835000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (29.675917ms)

** stderr ** 
	error: context "no-preload-835000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-835000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-835000 -n no-preload-835000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-835000 -n no-preload-835000: exit status 7 (31.051458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-835000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)
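Note: each "(dbg) Non-zero exit" line pairs a command with the exit code the harness recovered from it, e.g. "exit status 7 (may be ok)" for status on a stopped host. In Go that code comes out of an *exec.ExitError, as this small sketch shows (the command is a placeholder for out/minikube-darwin-arm64):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		// Placeholder command; the suite runs the built minikube binary.
		err := exec.Command("false").Run()
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			// e.g. 7 from `status` on a stopped host ("may be ok").
			fmt.Println("exit code:", ee.ExitCode())
		}
	}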

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-486000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-486000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-486000 describe deploy/metrics-server -n kube-system: exit status 1 (28.413792ms)

** stderr ** 
	error: context "embed-certs-486000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-486000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-486000 -n embed-certs-486000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-486000 -n embed-certs-486000: exit status 7 (38.47925ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-486000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-835000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
  }
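Note: the block above is go-cmp output: lines prefixed "-" are wanted but absent, "+" would be unexpected extras, and here the entire expected image list is missing because image list against the never-started VM returned nothing. A sketch of how such a diff is produced with github.com/google/go-cmp (the slices are shortened for brevity):

	package main

	import (
		"fmt"

		"github.com/google/go-cmp/cmp"
	)

	func main() {
		want := []string{
			"registry.k8s.io/kube-apiserver:v1.31.1",
			"registry.k8s.io/pause:3.10",
		}
		got := []string{} // image list from a VM that never started is empty
		if diff := cmp.Diff(want, got); diff != "" {
			// "-" lines exist only in want, "+" lines only in got.
			fmt.Printf("images missing (-want +got):\n%s", diff)
		}
	}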
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-835000 -n no-preload-835000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-835000 -n no-preload-835000: exit status 7 (31.27275ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-835000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.08s)

TestStartStop/group/no-preload/serial/Pause (0.11s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-835000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-835000 --alsologtostderr -v=1: exit status 83 (52.312709ms)

-- stdout --
	* The control-plane node no-preload-835000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-835000"

-- /stdout --
** stderr ** 
	I0914 10:42:27.148918    6735 out.go:345] Setting OutFile to fd 1 ...
	I0914 10:42:27.149036    6735 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:42:27.149042    6735 out.go:358] Setting ErrFile to fd 2...
	I0914 10:42:27.149045    6735 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:42:27.149171    6735 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19643-1079/.minikube/bin
	I0914 10:42:27.149388    6735 out.go:352] Setting JSON to false
	I0914 10:42:27.149393    6735 mustload.go:65] Loading cluster: no-preload-835000
	I0914 10:42:27.149609    6735 config.go:182] Loaded profile config "no-preload-835000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 10:42:27.156273    6735 out.go:177] * The control-plane node no-preload-835000 host is not running: state=Stopped
	I0914 10:42:27.164269    6735 out.go:177]   To start a cluster, run: "minikube start -p no-preload-835000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-835000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-835000 -n no-preload-835000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-835000 -n no-preload-835000: exit status 7 (32.54625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-835000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-835000 -n no-preload-835000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-835000 -n no-preload-835000: exit status 7 (27.246875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-835000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.11s)
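Note: pause exits with status 83 here rather than the 80 seen for provisioning failures: mustload finds the profile's host stopped, prints the advisory from out.go:177, and returns a distinct code. A sketch of that control flow (83 is simply the value observed above; its symbolic name is not shown in this log):

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		state := "Stopped" // what mustload/config reported in the log above
		if state != "Running" {
			fmt.Printf("* The control-plane node host is not running: state=%s\n", state)
			fmt.Println(`  To start a cluster, run: "minikube start -p <profile>"`)
			os.Exit(83) // the exit code observed for this condition
		}
	}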

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (10.16s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-231000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-231000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (10.084994708s)

-- stdout --
	* [default-k8s-diff-port-231000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19643
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-231000" primary control-plane node in "default-k8s-diff-port-231000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-231000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0914 10:42:27.578955    6767 out.go:345] Setting OutFile to fd 1 ...
	I0914 10:42:27.579084    6767 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:42:27.579087    6767 out.go:358] Setting ErrFile to fd 2...
	I0914 10:42:27.579090    6767 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:42:27.579239    6767 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19643-1079/.minikube/bin
	I0914 10:42:27.580321    6767 out.go:352] Setting JSON to false
	I0914 10:42:27.596466    6767 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4310,"bootTime":1726331437,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0914 10:42:27.596532    6767 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0914 10:42:27.600379    6767 out.go:177] * [default-k8s-diff-port-231000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0914 10:42:27.607273    6767 out.go:177]   - MINIKUBE_LOCATION=19643
	I0914 10:42:27.607312    6767 notify.go:220] Checking for updates...
	I0914 10:42:27.614207    6767 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	I0914 10:42:27.617254    6767 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 10:42:27.620288    6767 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 10:42:27.621718    6767 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	I0914 10:42:27.625243    6767 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 10:42:27.628623    6767 config.go:182] Loaded profile config "embed-certs-486000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 10:42:27.628681    6767 config.go:182] Loaded profile config "multinode-699000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 10:42:27.628724    6767 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 10:42:27.633095    6767 out.go:177] * Using the qemu2 driver based on user configuration
	I0914 10:42:27.640271    6767 start.go:297] selected driver: qemu2
	I0914 10:42:27.640278    6767 start.go:901] validating driver "qemu2" against <nil>
	I0914 10:42:27.640286    6767 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 10:42:27.642591    6767 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0914 10:42:27.645270    6767 out.go:177] * Automatically selected the socket_vmnet network
	I0914 10:42:27.648305    6767 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 10:42:27.648339    6767 cni.go:84] Creating CNI manager for ""
	I0914 10:42:27.648367    6767 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 10:42:27.648378    6767 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0914 10:42:27.648404    6767 start.go:340] cluster config:
	{Name:default-k8s-diff-port-231000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-231000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 10:42:27.652248    6767 iso.go:125] acquiring lock: {Name:mk9ce8cb266895455f3a8fa26b67755853c5c1f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 10:42:27.658212    6767 out.go:177] * Starting "default-k8s-diff-port-231000" primary control-plane node in "default-k8s-diff-port-231000" cluster
	I0914 10:42:27.662236    6767 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0914 10:42:27.662252    6767 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0914 10:42:27.662264    6767 cache.go:56] Caching tarball of preloaded images
	I0914 10:42:27.662335    6767 preload.go:172] Found /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 10:42:27.662342    6767 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0914 10:42:27.662405    6767 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/default-k8s-diff-port-231000/config.json ...
	I0914 10:42:27.662417    6767 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/default-k8s-diff-port-231000/config.json: {Name:mk38ed35da09b7b7a67390ab8d8be6f52c6c25c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 10:42:27.662652    6767 start.go:360] acquireMachinesLock for default-k8s-diff-port-231000: {Name:mk3755b4300f547653a71a0686083053e04fcd63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 10:42:27.662686    6767 start.go:364] duration metric: took 27.792µs to acquireMachinesLock for "default-k8s-diff-port-231000"
	I0914 10:42:27.662697    6767 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-231000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-231000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 10:42:27.662726    6767 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 10:42:27.671236    6767 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0914 10:42:27.689397    6767 start.go:159] libmachine.API.Create for "default-k8s-diff-port-231000" (driver="qemu2")
	I0914 10:42:27.689431    6767 client.go:168] LocalClient.Create starting
	I0914 10:42:27.689488    6767 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/ca.pem
	I0914 10:42:27.689521    6767 main.go:141] libmachine: Decoding PEM data...
	I0914 10:42:27.689530    6767 main.go:141] libmachine: Parsing certificate...
	I0914 10:42:27.689575    6767 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/cert.pem
	I0914 10:42:27.689600    6767 main.go:141] libmachine: Decoding PEM data...
	I0914 10:42:27.689611    6767 main.go:141] libmachine: Parsing certificate...
	I0914 10:42:27.690066    6767 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19643-1079/.minikube/cache/iso/arm64/minikube-v1.34.0-1726281733-19643-arm64.iso...
	I0914 10:42:27.985620    6767 main.go:141] libmachine: Creating SSH key...
	I0914 10:42:28.178001    6767 main.go:141] libmachine: Creating Disk image...
	I0914 10:42:28.178009    6767 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 10:42:28.178202    6767 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/default-k8s-diff-port-231000/disk.qcow2.raw /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/default-k8s-diff-port-231000/disk.qcow2
	I0914 10:42:28.188113    6767 main.go:141] libmachine: STDOUT: 
	I0914 10:42:28.188135    6767 main.go:141] libmachine: STDERR: 
	I0914 10:42:28.188214    6767 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/default-k8s-diff-port-231000/disk.qcow2 +20000M
	I0914 10:42:28.196604    6767 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 10:42:28.196627    6767 main.go:141] libmachine: STDERR: 
	I0914 10:42:28.196640    6767 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/default-k8s-diff-port-231000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/default-k8s-diff-port-231000/disk.qcow2
	I0914 10:42:28.196646    6767 main.go:141] libmachine: Starting QEMU VM...
	I0914 10:42:28.196659    6767 qemu.go:418] Using hvf for hardware acceleration
	I0914 10:42:28.196683    6767 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/default-k8s-diff-port-231000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/default-k8s-diff-port-231000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/default-k8s-diff-port-231000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:30:ce:a0:e8:9d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/default-k8s-diff-port-231000/disk.qcow2
	I0914 10:42:28.198298    6767 main.go:141] libmachine: STDOUT: 
	I0914 10:42:28.198312    6767 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 10:42:28.198330    6767 client.go:171] duration metric: took 508.913542ms to LocalClient.Create
	I0914 10:42:30.200493    6767 start.go:128] duration metric: took 2.537843958s to createHost
	I0914 10:42:30.200566    6767 start.go:83] releasing machines lock for "default-k8s-diff-port-231000", held for 2.5379775s
	W0914 10:42:30.200626    6767 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 10:42:30.213919    6767 out.go:177] * Deleting "default-k8s-diff-port-231000" in qemu2 ...
	W0914 10:42:30.246732    6767 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 10:42:30.246761    6767 start.go:729] Will try again in 5 seconds ...
	I0914 10:42:35.248746    6767 start.go:360] acquireMachinesLock for default-k8s-diff-port-231000: {Name:mk3755b4300f547653a71a0686083053e04fcd63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 10:42:35.249226    6767 start.go:364] duration metric: took 389.791µs to acquireMachinesLock for "default-k8s-diff-port-231000"
	I0914 10:42:35.249364    6767 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-231000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-231000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 10:42:35.249635    6767 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 10:42:35.258320    6767 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0914 10:42:35.310145    6767 start.go:159] libmachine.API.Create for "default-k8s-diff-port-231000" (driver="qemu2")
	I0914 10:42:35.310195    6767 client.go:168] LocalClient.Create starting
	I0914 10:42:35.310311    6767 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/ca.pem
	I0914 10:42:35.310379    6767 main.go:141] libmachine: Decoding PEM data...
	I0914 10:42:35.310397    6767 main.go:141] libmachine: Parsing certificate...
	I0914 10:42:35.310461    6767 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/cert.pem
	I0914 10:42:35.310506    6767 main.go:141] libmachine: Decoding PEM data...
	I0914 10:42:35.310519    6767 main.go:141] libmachine: Parsing certificate...
	I0914 10:42:35.311047    6767 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19643-1079/.minikube/cache/iso/arm64/minikube-v1.34.0-1726281733-19643-arm64.iso...
	I0914 10:42:35.485571    6767 main.go:141] libmachine: Creating SSH key...
	I0914 10:42:35.548513    6767 main.go:141] libmachine: Creating Disk image...
	I0914 10:42:35.548518    6767 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 10:42:35.548686    6767 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/default-k8s-diff-port-231000/disk.qcow2.raw /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/default-k8s-diff-port-231000/disk.qcow2
	I0914 10:42:35.557833    6767 main.go:141] libmachine: STDOUT: 
	I0914 10:42:35.557854    6767 main.go:141] libmachine: STDERR: 
	I0914 10:42:35.557937    6767 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/default-k8s-diff-port-231000/disk.qcow2 +20000M
	I0914 10:42:35.565747    6767 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 10:42:35.565763    6767 main.go:141] libmachine: STDERR: 
	I0914 10:42:35.565774    6767 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/default-k8s-diff-port-231000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/default-k8s-diff-port-231000/disk.qcow2
	I0914 10:42:35.565785    6767 main.go:141] libmachine: Starting QEMU VM...
	I0914 10:42:35.565794    6767 qemu.go:418] Using hvf for hardware acceleration
	I0914 10:42:35.565822    6767 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/default-k8s-diff-port-231000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/default-k8s-diff-port-231000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/default-k8s-diff-port-231000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:27:b9:2d:02:3a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/default-k8s-diff-port-231000/disk.qcow2
	I0914 10:42:35.567433    6767 main.go:141] libmachine: STDOUT: 
	I0914 10:42:35.567452    6767 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 10:42:35.567464    6767 client.go:171] duration metric: took 257.273792ms to LocalClient.Create
	I0914 10:42:37.567985    6767 start.go:128] duration metric: took 2.318419166s to createHost
	I0914 10:42:37.568031    6767 start.go:83] releasing machines lock for "default-k8s-diff-port-231000", held for 2.318870542s
	W0914 10:42:37.568311    6767 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-231000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-231000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 10:42:37.590017    6767 out.go:201] 
	W0914 10:42:37.599056    6767 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 10:42:37.599084    6767 out.go:270] * 
	* 
	W0914 10:42:37.601686    6767 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 10:42:37.613972    6767 out.go:201] 

** /stderr **
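Note: the stderr above shows the driver's retry shape: create the host, hit the socket_vmnet refusal, delete the half-created VM, wait five seconds ("Will try again in 5 seconds ..."), and retry exactly once before exiting with GUEST_PROVISION. A condensed sketch of that flow (startHost is a stand-in that fails the way this log does):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// startHost stands in for the driver's host-start step; here it fails
	// the same way every attempt in this log does.
	func startHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := startHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
			if err := startHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			}
		}
	}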
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-231000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-231000 -n default-k8s-diff-port-231000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-231000 -n default-k8s-diff-port-231000: exit status 7 (67.95375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-231000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (10.16s)
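Note: before each launch attempt, libmachine shells out to qemu-img twice, converting the raw boot disk to qcow2 and then growing it by 20000 MB, exactly as the "executing:" lines in the stderr show; both steps succeed here, which isolates the failure to the networking step. A sketch of those two calls via os/exec (paths are placeholders):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		raw, qcow := "disk.qcow2.raw", "disk.qcow2" // placeholder paths
		steps := [][]string{
			{"qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow},
			{"qemu-img", "resize", qcow, "+20000M"},
		}
		for _, s := range steps {
			// CombinedOutput captures the STDOUT/STDERR pairs the log echoes.
			out, err := exec.Command(s[0], s[1:]...).CombinedOutput()
			if err != nil {
				fmt.Printf("%v failed: %v\n%s", s, err, out)
				return
			}
		}
	}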

TestStartStop/group/embed-certs/serial/SecondStart (7.23s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-486000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-486000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (7.17175425s)

-- stdout --
	* [embed-certs-486000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19643
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-486000" primary control-plane node in "embed-certs-486000" cluster
	* Restarting existing qemu2 VM for "embed-certs-486000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-486000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0914 10:42:30.505178    6796 out.go:345] Setting OutFile to fd 1 ...
	I0914 10:42:30.505310    6796 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:42:30.505314    6796 out.go:358] Setting ErrFile to fd 2...
	I0914 10:42:30.505316    6796 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:42:30.505458    6796 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19643-1079/.minikube/bin
	I0914 10:42:30.506442    6796 out.go:352] Setting JSON to false
	I0914 10:42:30.522547    6796 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4313,"bootTime":1726331437,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0914 10:42:30.522621    6796 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0914 10:42:30.526962    6796 out.go:177] * [embed-certs-486000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0914 10:42:30.532837    6796 out.go:177]   - MINIKUBE_LOCATION=19643
	I0914 10:42:30.532906    6796 notify.go:220] Checking for updates...
	I0914 10:42:30.539745    6796 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	I0914 10:42:30.542838    6796 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 10:42:30.545862    6796 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 10:42:30.547392    6796 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	I0914 10:42:30.550822    6796 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 10:42:30.554196    6796 config.go:182] Loaded profile config "embed-certs-486000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 10:42:30.554468    6796 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 10:42:30.558748    6796 out.go:177] * Using the qemu2 driver based on existing profile
	I0914 10:42:30.565823    6796 start.go:297] selected driver: qemu2
	I0914 10:42:30.565829    6796 start.go:901] validating driver "qemu2" against &{Name:embed-certs-486000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-486000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 10:42:30.565881    6796 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 10:42:30.568108    6796 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 10:42:30.568134    6796 cni.go:84] Creating CNI manager for ""
	I0914 10:42:30.568177    6796 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 10:42:30.568206    6796 start.go:340] cluster config:
	{Name:embed-certs-486000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-486000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 10:42:30.571656    6796 iso.go:125] acquiring lock: {Name:mk9ce8cb266895455f3a8fa26b67755853c5c1f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 10:42:30.579810    6796 out.go:177] * Starting "embed-certs-486000" primary control-plane node in "embed-certs-486000" cluster
	I0914 10:42:30.583840    6796 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0914 10:42:30.583852    6796 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0914 10:42:30.583861    6796 cache.go:56] Caching tarball of preloaded images
	I0914 10:42:30.583915    6796 preload.go:172] Found /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 10:42:30.583920    6796 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0914 10:42:30.583968    6796 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/embed-certs-486000/config.json ...
	I0914 10:42:30.584475    6796 start.go:360] acquireMachinesLock for embed-certs-486000: {Name:mk3755b4300f547653a71a0686083053e04fcd63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 10:42:30.584508    6796 start.go:364] duration metric: took 27.333µs to acquireMachinesLock for "embed-certs-486000"
	I0914 10:42:30.584517    6796 start.go:96] Skipping create...Using existing machine configuration
	I0914 10:42:30.584523    6796 fix.go:54] fixHost starting: 
	I0914 10:42:30.584640    6796 fix.go:112] recreateIfNeeded on embed-certs-486000: state=Stopped err=<nil>
	W0914 10:42:30.584650    6796 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 10:42:30.588896    6796 out.go:177] * Restarting existing qemu2 VM for "embed-certs-486000" ...
	I0914 10:42:30.596840    6796 qemu.go:418] Using hvf for hardware acceleration
	I0914 10:42:30.596873    6796 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/embed-certs-486000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/embed-certs-486000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/embed-certs-486000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:57:ce:67:db:70 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/embed-certs-486000/disk.qcow2
	I0914 10:42:30.598858    6796 main.go:141] libmachine: STDOUT: 
	I0914 10:42:30.598876    6796 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 10:42:30.598904    6796 fix.go:56] duration metric: took 14.383125ms for fixHost
	I0914 10:42:30.598910    6796 start.go:83] releasing machines lock for "embed-certs-486000", held for 14.397583ms
	W0914 10:42:30.598914    6796 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 10:42:30.598942    6796 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 10:42:30.598946    6796 start.go:729] Will try again in 5 seconds ...
	I0914 10:42:35.600884    6796 start.go:360] acquireMachinesLock for embed-certs-486000: {Name:mk3755b4300f547653a71a0686083053e04fcd63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 10:42:37.568166    6796 start.go:364] duration metric: took 1.967301917s to acquireMachinesLock for "embed-certs-486000"
	I0914 10:42:37.568362    6796 start.go:96] Skipping create...Using existing machine configuration
	I0914 10:42:37.568378    6796 fix.go:54] fixHost starting: 
	I0914 10:42:37.569126    6796 fix.go:112] recreateIfNeeded on embed-certs-486000: state=Stopped err=<nil>
	W0914 10:42:37.569156    6796 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 10:42:37.594953    6796 out.go:177] * Restarting existing qemu2 VM for "embed-certs-486000" ...
	I0914 10:42:37.601971    6796 qemu.go:418] Using hvf for hardware acceleration
	I0914 10:42:37.602225    6796 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/embed-certs-486000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/embed-certs-486000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/embed-certs-486000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:57:ce:67:db:70 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/embed-certs-486000/disk.qcow2
	I0914 10:42:37.611634    6796 main.go:141] libmachine: STDOUT: 
	I0914 10:42:37.611701    6796 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 10:42:37.611776    6796 fix.go:56] duration metric: took 43.396833ms for fixHost
	I0914 10:42:37.611794    6796 start.go:83] releasing machines lock for "embed-certs-486000", held for 43.573709ms
	W0914 10:42:37.612008    6796 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-486000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-486000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 10:42:37.624893    6796 out.go:201] 
	W0914 10:42:37.629063    6796 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 10:42:37.629093    6796 out.go:270] * 
	* 
	W0914 10:42:37.632073    6796 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 10:42:37.637889    6796 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-486000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-486000 -n embed-certs-486000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-486000 -n embed-certs-486000: exit status 7 (57.186ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-486000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (7.23s)
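
Every failure in this group reduces to the same root cause visible in the STDERR above: the qemu2 driver launches the VM through /opt/socket_vmnet/bin/socket_vmnet_client, which must connect to the socket_vmnet daemon at /var/run/socket_vmnet, and that connect is refused. A minimal Go sketch (illustrative only, not part of the test suite; the socket path is taken from the log) that reproduces the failing check in isolation:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the unix socket that socket_vmnet_client needs; "connection
	// refused" here matches the STDERR captured throughout this run.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If this prints a connection-refused error, the daemon is down on the agent and every qemu2 start in the run will fail the same way.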

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-231000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-231000 create -f testdata/busybox.yaml: exit status 1 (31.293125ms)

** stderr ** 
	error: context "default-k8s-diff-port-231000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-231000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-231000 -n default-k8s-diff-port-231000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-231000 -n default-k8s-diff-port-231000: exit status 7 (32.157875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-231000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-231000 -n default-k8s-diff-port-231000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-231000 -n default-k8s-diff-port-231000: exit status 7 (35.09725ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-231000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.10s)
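
The kubectl error here is a downstream symptom: because the earlier start exited with status 80 before provisioning completed, no context named default-k8s-diff-port-231000 was ever written to the kubeconfig, so every kubectl --context call fails with exit status 1. A short illustrative sketch of checking for a context before issuing such commands (hasContext is a hypothetical helper, not from helpers_test.go):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hasContext reports whether the current kubeconfig defines a context
// with the given name.
func hasContext(name string) (bool, error) {
	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
	if err != nil {
		return false, err
	}
	for _, ctx := range strings.Fields(string(out)) {
		if ctx == name {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := hasContext("default-k8s-diff-port-231000")
	fmt.Println(ok, err) // expected: false <nil> in the state captured above
}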

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.04s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-486000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-486000 -n embed-certs-486000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-486000 -n embed-certs-486000: exit status 7 (35.024375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-486000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.04s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-486000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-486000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-486000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (28.127ms)

** stderr ** 
	error: context "embed-certs-486000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-486000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-486000 -n embed-certs-486000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-486000 -n embed-certs-486000: exit status 7 (33.023584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-486000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-486000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-486000 -n embed-certs-486000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-486000 -n embed-certs-486000: exit status 7 (31.533458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-486000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.09s)
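
The -want +got diff lists every expected v1.31.1 image as missing because image list ran against a stopped VM and returned an empty set. A self-contained sketch of the set difference the assertion is reporting (want entries copied from the diff above; the helper is illustrative, not the test's actual comparison):

package main

import "fmt"

// missing returns the entries of want that are absent from got,
// mirroring the "-want +got" diff printed by the test.
func missing(want, got []string) []string {
	have := make(map[string]bool, len(got))
	for _, img := range got {
		have[img] = true
	}
	var absent []string
	for _, img := range want {
		if !have[img] {
			absent = append(absent, img)
		}
	}
	return absent
}

func main() {
	want := []string{
		"gcr.io/k8s-minikube/storage-provisioner:v5",
		"registry.k8s.io/coredns/coredns:v1.11.3",
		"registry.k8s.io/etcd:3.5.15-0",
		"registry.k8s.io/kube-apiserver:v1.31.1",
		"registry.k8s.io/kube-controller-manager:v1.31.1",
		"registry.k8s.io/kube-proxy:v1.31.1",
		"registry.k8s.io/kube-scheduler:v1.31.1",
		"registry.k8s.io/pause:3.10",
	}
	// A stopped VM yields no images, so every entry is reported missing.
	fmt.Println(missing(want, nil))
}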

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-231000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-231000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-231000 describe deploy/metrics-server -n kube-system: exit status 1 (29.563708ms)

** stderr ** 
	error: context "default-k8s-diff-port-231000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-231000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-231000 -n default-k8s-diff-port-231000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-231000 -n default-k8s-diff-port-231000: exit status 7 (31.500417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-231000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)

TestStartStop/group/embed-certs/serial/Pause (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-486000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-486000 --alsologtostderr -v=1: exit status 83 (51.338667ms)

-- stdout --
	* The control-plane node embed-certs-486000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-486000"

-- /stdout --
** stderr ** 
	I0914 10:42:37.925072    6829 out.go:345] Setting OutFile to fd 1 ...
	I0914 10:42:37.925202    6829 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:42:37.925206    6829 out.go:358] Setting ErrFile to fd 2...
	I0914 10:42:37.925208    6829 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:42:37.925347    6829 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19643-1079/.minikube/bin
	I0914 10:42:37.925573    6829 out.go:352] Setting JSON to false
	I0914 10:42:37.925578    6829 mustload.go:65] Loading cluster: embed-certs-486000
	I0914 10:42:37.925814    6829 config.go:182] Loaded profile config "embed-certs-486000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 10:42:37.930750    6829 out.go:177] * The control-plane node embed-certs-486000 host is not running: state=Stopped
	I0914 10:42:37.937666    6829 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-486000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-486000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-486000 -n embed-certs-486000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-486000 -n embed-certs-486000: exit status 7 (30.492667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-486000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-486000 -n embed-certs-486000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-486000 -n embed-certs-486000: exit status 7 (28.41125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-486000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.11s)
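
Note the two distinct exit codes in this block: pause exits 83, minikube's advice path for a stopped host, while the post-mortem status probe exits 7, which the harness treats as "may be ok". A generic sketch (not the helpers_test.go implementation) of recovering such exit codes from a subprocess in Go:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Run the same status probe the post-mortem uses and recover its
	// exit code; 7 corresponds to the "Stopped" host seen above.
	cmd := exec.Command("out/minikube-darwin-arm64", "status",
		"--format={{.Host}}", "-p", "embed-certs-486000")
	err := cmd.Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		fmt.Println("exit status:", exitErr.ExitCode())
	} else if err != nil {
		fmt.Println("failed to run:", err)
	}
}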

TestStartStop/group/newest-cni/serial/FirstStart (9.93s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-566000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-566000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.854661375s)

-- stdout --
	* [newest-cni-566000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19643
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-566000" primary control-plane node in "newest-cni-566000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-566000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0914 10:42:38.244826    6854 out.go:345] Setting OutFile to fd 1 ...
	I0914 10:42:38.244975    6854 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:42:38.244979    6854 out.go:358] Setting ErrFile to fd 2...
	I0914 10:42:38.244981    6854 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:42:38.245135    6854 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19643-1079/.minikube/bin
	I0914 10:42:38.246448    6854 out.go:352] Setting JSON to false
	I0914 10:42:38.262948    6854 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4321,"bootTime":1726331437,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0914 10:42:38.263013    6854 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0914 10:42:38.267668    6854 out.go:177] * [newest-cni-566000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0914 10:42:38.274483    6854 out.go:177]   - MINIKUBE_LOCATION=19643
	I0914 10:42:38.274532    6854 notify.go:220] Checking for updates...
	I0914 10:42:38.280641    6854 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	I0914 10:42:38.281904    6854 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 10:42:38.284648    6854 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 10:42:38.287634    6854 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	I0914 10:42:38.290644    6854 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 10:42:38.293941    6854 config.go:182] Loaded profile config "default-k8s-diff-port-231000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 10:42:38.294003    6854 config.go:182] Loaded profile config "multinode-699000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 10:42:38.294056    6854 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 10:42:38.297619    6854 out.go:177] * Using the qemu2 driver based on user configuration
	I0914 10:42:38.304689    6854 start.go:297] selected driver: qemu2
	I0914 10:42:38.304695    6854 start.go:901] validating driver "qemu2" against <nil>
	I0914 10:42:38.304701    6854 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 10:42:38.306990    6854 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0914 10:42:38.307032    6854 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0914 10:42:38.315622    6854 out.go:177] * Automatically selected the socket_vmnet network
	I0914 10:42:38.319688    6854 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0914 10:42:38.319706    6854 cni.go:84] Creating CNI manager for ""
	I0914 10:42:38.319736    6854 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 10:42:38.319742    6854 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0914 10:42:38.319773    6854 start.go:340] cluster config:
	{Name:newest-cni-566000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-566000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 10:42:38.323374    6854 iso.go:125] acquiring lock: {Name:mk9ce8cb266895455f3a8fa26b67755853c5c1f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 10:42:38.331592    6854 out.go:177] * Starting "newest-cni-566000" primary control-plane node in "newest-cni-566000" cluster
	I0914 10:42:38.335619    6854 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0914 10:42:38.335648    6854 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0914 10:42:38.335658    6854 cache.go:56] Caching tarball of preloaded images
	I0914 10:42:38.335718    6854 preload.go:172] Found /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 10:42:38.335723    6854 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0914 10:42:38.335780    6854 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/newest-cni-566000/config.json ...
	I0914 10:42:38.335791    6854 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/newest-cni-566000/config.json: {Name:mk89b3832513248a6d9c81a2ff61242fb938e51c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 10:42:38.336107    6854 start.go:360] acquireMachinesLock for newest-cni-566000: {Name:mk3755b4300f547653a71a0686083053e04fcd63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 10:42:38.336141    6854 start.go:364] duration metric: took 27.75µs to acquireMachinesLock for "newest-cni-566000"
	I0914 10:42:38.336152    6854 start.go:93] Provisioning new machine with config: &{Name:newest-cni-566000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-566000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 10:42:38.336189    6854 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 10:42:38.343618    6854 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0914 10:42:38.361562    6854 start.go:159] libmachine.API.Create for "newest-cni-566000" (driver="qemu2")
	I0914 10:42:38.361594    6854 client.go:168] LocalClient.Create starting
	I0914 10:42:38.361658    6854 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/ca.pem
	I0914 10:42:38.361694    6854 main.go:141] libmachine: Decoding PEM data...
	I0914 10:42:38.361703    6854 main.go:141] libmachine: Parsing certificate...
	I0914 10:42:38.361740    6854 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/cert.pem
	I0914 10:42:38.361763    6854 main.go:141] libmachine: Decoding PEM data...
	I0914 10:42:38.361770    6854 main.go:141] libmachine: Parsing certificate...
	I0914 10:42:38.362167    6854 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19643-1079/.minikube/cache/iso/arm64/minikube-v1.34.0-1726281733-19643-arm64.iso...
	I0914 10:42:38.525798    6854 main.go:141] libmachine: Creating SSH key...
	I0914 10:42:38.594797    6854 main.go:141] libmachine: Creating Disk image...
	I0914 10:42:38.594802    6854 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 10:42:38.594976    6854 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/newest-cni-566000/disk.qcow2.raw /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/newest-cni-566000/disk.qcow2
	I0914 10:42:38.604119    6854 main.go:141] libmachine: STDOUT: 
	I0914 10:42:38.604142    6854 main.go:141] libmachine: STDERR: 
	I0914 10:42:38.604213    6854 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/newest-cni-566000/disk.qcow2 +20000M
	I0914 10:42:38.612241    6854 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 10:42:38.612258    6854 main.go:141] libmachine: STDERR: 
	I0914 10:42:38.612277    6854 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/newest-cni-566000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/newest-cni-566000/disk.qcow2
	I0914 10:42:38.612282    6854 main.go:141] libmachine: Starting QEMU VM...
	I0914 10:42:38.612298    6854 qemu.go:418] Using hvf for hardware acceleration
	I0914 10:42:38.612323    6854 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/newest-cni-566000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/newest-cni-566000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/newest-cni-566000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:f4:aa:3f:4e:02 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/newest-cni-566000/disk.qcow2
	I0914 10:42:38.613929    6854 main.go:141] libmachine: STDOUT: 
	I0914 10:42:38.613943    6854 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 10:42:38.613972    6854 client.go:171] duration metric: took 252.382291ms to LocalClient.Create
	I0914 10:42:40.616094    6854 start.go:128] duration metric: took 2.279975042s to createHost
	I0914 10:42:40.616203    6854 start.go:83] releasing machines lock for "newest-cni-566000", held for 2.280123375s
	W0914 10:42:40.616262    6854 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 10:42:40.632684    6854 out.go:177] * Deleting "newest-cni-566000" in qemu2 ...
	W0914 10:42:40.667374    6854 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 10:42:40.667396    6854 start.go:729] Will try again in 5 seconds ...
	I0914 10:42:45.669341    6854 start.go:360] acquireMachinesLock for newest-cni-566000: {Name:mk3755b4300f547653a71a0686083053e04fcd63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 10:42:45.673964    6854 start.go:364] duration metric: took 4.541791ms to acquireMachinesLock for "newest-cni-566000"
	I0914 10:42:45.674020    6854 start.go:93] Provisioning new machine with config: &{Name:newest-cni-566000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-566000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 10:42:45.674332    6854 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 10:42:45.684818    6854 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0914 10:42:45.732632    6854 start.go:159] libmachine.API.Create for "newest-cni-566000" (driver="qemu2")
	I0914 10:42:45.732686    6854 client.go:168] LocalClient.Create starting
	I0914 10:42:45.732793    6854 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/ca.pem
	I0914 10:42:45.732850    6854 main.go:141] libmachine: Decoding PEM data...
	I0914 10:42:45.732867    6854 main.go:141] libmachine: Parsing certificate...
	I0914 10:42:45.732922    6854 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19643-1079/.minikube/certs/cert.pem
	I0914 10:42:45.732967    6854 main.go:141] libmachine: Decoding PEM data...
	I0914 10:42:45.732977    6854 main.go:141] libmachine: Parsing certificate...
	I0914 10:42:45.733498    6854 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19643-1079/.minikube/cache/iso/arm64/minikube-v1.34.0-1726281733-19643-arm64.iso...
	I0914 10:42:45.908028    6854 main.go:141] libmachine: Creating SSH key...
	I0914 10:42:46.007375    6854 main.go:141] libmachine: Creating Disk image...
	I0914 10:42:46.007384    6854 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 10:42:46.007579    6854 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/newest-cni-566000/disk.qcow2.raw /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/newest-cni-566000/disk.qcow2
	I0914 10:42:46.018022    6854 main.go:141] libmachine: STDOUT: 
	I0914 10:42:46.018045    6854 main.go:141] libmachine: STDERR: 
	I0914 10:42:46.018148    6854 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/newest-cni-566000/disk.qcow2 +20000M
	I0914 10:42:46.027242    6854 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 10:42:46.027265    6854 main.go:141] libmachine: STDERR: 
	I0914 10:42:46.027278    6854 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/newest-cni-566000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/newest-cni-566000/disk.qcow2
	I0914 10:42:46.027284    6854 main.go:141] libmachine: Starting QEMU VM...
	I0914 10:42:46.027301    6854 qemu.go:418] Using hvf for hardware acceleration
	I0914 10:42:46.027329    6854 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/newest-cni-566000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/newest-cni-566000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/newest-cni-566000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:39:be:a5:74:c9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/newest-cni-566000/disk.qcow2
	I0914 10:42:46.029331    6854 main.go:141] libmachine: STDOUT: 
	I0914 10:42:46.029349    6854 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 10:42:46.029360    6854 client.go:171] duration metric: took 296.681333ms to LocalClient.Create
	I0914 10:42:48.031588    6854 start.go:128] duration metric: took 2.357298708s to createHost
	I0914 10:42:48.031695    6854 start.go:83] releasing machines lock for "newest-cni-566000", held for 2.357805834s
	W0914 10:42:48.032042    6854 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-566000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-566000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 10:42:48.041696    6854 out.go:201] 
	W0914 10:42:48.046783    6854 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 10:42:48.046822    6854 out.go:270] * 
	* 
	W0914 10:42:48.049476    6854 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 10:42:48.057653    6854 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-566000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-566000 -n newest-cni-566000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-566000 -n newest-cni-566000: exit status 7 (68.591792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-566000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (9.93s)
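
The disk-image phase succeeded on both attempts here (qemu-img convert and resize returned empty STDERR), so the failure is isolated to the networking step that follows. A sketch for inspecting the created disk independently, assuming qemu-img is on PATH as in the log; the path is copied from the log above, and the profile was deleted during cleanup, so this only works while the file still exists:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// qemu-img info reports the virtual size; after the "+20000M" resize
	// seen in the log it should show roughly 20 GB.
	out, err := exec.Command("qemu-img", "info",
		"/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/newest-cni-566000/disk.qcow2").CombinedOutput()
	if err != nil {
		fmt.Println("qemu-img info failed:", err)
		return
	}
	fmt.Print(string(out))
}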

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.73s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-231000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-231000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.679101459s)

-- stdout --
	* [default-k8s-diff-port-231000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19643
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-231000" primary control-plane node in "default-k8s-diff-port-231000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-231000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-231000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0914 10:42:40.058953    6877 out.go:345] Setting OutFile to fd 1 ...
	I0914 10:42:40.059089    6877 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:42:40.059092    6877 out.go:358] Setting ErrFile to fd 2...
	I0914 10:42:40.059095    6877 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:42:40.059212    6877 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19643-1079/.minikube/bin
	I0914 10:42:40.060223    6877 out.go:352] Setting JSON to false
	I0914 10:42:40.076325    6877 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4323,"bootTime":1726331437,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0914 10:42:40.076391    6877 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0914 10:42:40.081636    6877 out.go:177] * [default-k8s-diff-port-231000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0914 10:42:40.088679    6877 out.go:177]   - MINIKUBE_LOCATION=19643
	I0914 10:42:40.088737    6877 notify.go:220] Checking for updates...
	I0914 10:42:40.095641    6877 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	I0914 10:42:40.098656    6877 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 10:42:40.101717    6877 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 10:42:40.104700    6877 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	I0914 10:42:40.107647    6877 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 10:42:40.110943    6877 config.go:182] Loaded profile config "default-k8s-diff-port-231000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 10:42:40.111201    6877 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 10:42:40.115703    6877 out.go:177] * Using the qemu2 driver based on existing profile
	I0914 10:42:40.122606    6877 start.go:297] selected driver: qemu2
	I0914 10:42:40.122611    6877 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-231000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-231000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:f
alse ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 10:42:40.122657    6877 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 10:42:40.125146    6877 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 10:42:40.125169    6877 cni.go:84] Creating CNI manager for ""
	I0914 10:42:40.125190    6877 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 10:42:40.125214    6877 start.go:340] cluster config:
	{Name:default-k8s-diff-port-231000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-231000 Name
space:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/min
ikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 10:42:40.128684    6877 iso.go:125] acquiring lock: {Name:mk9ce8cb266895455f3a8fa26b67755853c5c1f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 10:42:40.135536    6877 out.go:177] * Starting "default-k8s-diff-port-231000" primary control-plane node in "default-k8s-diff-port-231000" cluster
	I0914 10:42:40.139675    6877 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0914 10:42:40.139687    6877 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0914 10:42:40.139696    6877 cache.go:56] Caching tarball of preloaded images
	I0914 10:42:40.139742    6877 preload.go:172] Found /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 10:42:40.139747    6877 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0914 10:42:40.139799    6877 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/default-k8s-diff-port-231000/config.json ...
	I0914 10:42:40.140249    6877 start.go:360] acquireMachinesLock for default-k8s-diff-port-231000: {Name:mk3755b4300f547653a71a0686083053e04fcd63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 10:42:40.616334    6877 start.go:364] duration metric: took 476.067416ms to acquireMachinesLock for "default-k8s-diff-port-231000"
	I0914 10:42:40.616499    6877 start.go:96] Skipping create...Using existing machine configuration
	I0914 10:42:40.616530    6877 fix.go:54] fixHost starting: 
	I0914 10:42:40.617219    6877 fix.go:112] recreateIfNeeded on default-k8s-diff-port-231000: state=Stopped err=<nil>
	W0914 10:42:40.617265    6877 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 10:42:40.622758    6877 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-231000" ...
	I0914 10:42:40.636742    6877 qemu.go:418] Using hvf for hardware acceleration
	I0914 10:42:40.636927    6877 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/default-k8s-diff-port-231000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/default-k8s-diff-port-231000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/default-k8s-diff-port-231000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:27:b9:2d:02:3a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/default-k8s-diff-port-231000/disk.qcow2
	I0914 10:42:40.647496    6877 main.go:141] libmachine: STDOUT: 
	I0914 10:42:40.647568    6877 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 10:42:40.647714    6877 fix.go:56] duration metric: took 31.187125ms for fixHost
	I0914 10:42:40.647735    6877 start.go:83] releasing machines lock for "default-k8s-diff-port-231000", held for 31.371583ms
	W0914 10:42:40.647765    6877 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 10:42:40.647928    6877 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 10:42:40.647944    6877 start.go:729] Will try again in 5 seconds ...
	I0914 10:42:45.649938    6877 start.go:360] acquireMachinesLock for default-k8s-diff-port-231000: {Name:mk3755b4300f547653a71a0686083053e04fcd63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 10:42:45.650360    6877 start.go:364] duration metric: took 346.875µs to acquireMachinesLock for "default-k8s-diff-port-231000"
	I0914 10:42:45.650486    6877 start.go:96] Skipping create...Using existing machine configuration
	I0914 10:42:45.650510    6877 fix.go:54] fixHost starting: 
	I0914 10:42:45.651292    6877 fix.go:112] recreateIfNeeded on default-k8s-diff-port-231000: state=Stopped err=<nil>
	W0914 10:42:45.651318    6877 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 10:42:45.660852    6877 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-231000" ...
	I0914 10:42:45.663888    6877 qemu.go:418] Using hvf for hardware acceleration
	I0914 10:42:45.664087    6877 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/default-k8s-diff-port-231000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/default-k8s-diff-port-231000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/default-k8s-diff-port-231000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:27:b9:2d:02:3a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/default-k8s-diff-port-231000/disk.qcow2
	I0914 10:42:45.673666    6877 main.go:141] libmachine: STDOUT: 
	I0914 10:42:45.673767    6877 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 10:42:45.673873    6877 fix.go:56] duration metric: took 23.365042ms for fixHost
	I0914 10:42:45.673898    6877 start.go:83] releasing machines lock for "default-k8s-diff-port-231000", held for 23.515833ms
	W0914 10:42:45.674072    6877 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-231000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-231000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 10:42:45.684817    6877 out.go:201] 
	W0914 10:42:45.688934    6877 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 10:42:45.688970    6877 out.go:270] * 
	* 
	W0914 10:42:45.691509    6877 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 10:42:45.697851    6877 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-231000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-231000 -n default-k8s-diff-port-231000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-231000 -n default-k8s-diff-port-231000: exit status 7 (51.434875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-231000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.73s)
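
The stderr trace shows the driver's recovery path: fixHost fails, start.go logs "Will try again in 5 seconds", retries exactly once, then exits 80 with GUEST_PROVISION. The pattern is a single bounded retry, sketched below in Go (illustrative only; startWithRetry is a hypothetical helper, not minikube's actual start.go):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// startWithRetry retries a failed host start once after a fixed delay,
	// mirroring the two "Restarting existing qemu2 VM" attempts in the log.
	func startWithRetry(start func() error) error {
		if err := start(); err != nil {
			fmt.Printf("! StartHost failed, but will try again: %v\n", err)
			time.Sleep(5 * time.Second)
			return start()
		}
		return nil
	}

	func main() {
		refused := errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
		if err := startWithRetry(func() error { return refused }); err != nil {
			fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
		}
	}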

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.04s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-231000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-231000 -n default-k8s-diff-port-231000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-231000 -n default-k8s-diff-port-231000: exit status 7 (34.9055ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-231000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.04s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-231000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-231000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-231000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (29.719334ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-231000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-231000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-231000 -n default-k8s-diff-port-231000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-231000 -n default-k8s-diff-port-231000: exit status 7 (34.336458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-231000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)
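
Both post-stop checks above fail before ever reaching a cluster: building the client config aborts because the profile's kubeconfig context was never recreated after the failed SecondStart. The same "context ... does not exist" error can be produced with client-go's clientcmd (a sketch; it assumes only the context name taken from the log):

	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Request a kubeconfig context that the failed start never wrote;
		// clientcmd returns the same "does not exist" error seen above.
		rules := clientcmd.NewDefaultClientConfigLoadingRules()
		overrides := &clientcmd.ConfigOverrides{CurrentContext: "default-k8s-diff-port-231000"}
		loader := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(rules, overrides)
		if _, err := loader.ClientConfig(); err != nil {
			fmt.Println("client config:", err)
		}
	}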

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-231000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-231000 -n default-k8s-diff-port-231000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-231000 -n default-k8s-diff-port-231000: exit status 7 (30.805ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-231000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.09s)
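
The "(-want +got)" layout above matches go-cmp's diff format (an assumption from the output shape; the sketch below uses github.com/google/go-cmp). With the VM stopped, image list returns nothing, so every expected v1.31.1 image lands on the -want side:

	package main

	import (
		"fmt"

		"github.com/google/go-cmp/cmp"
	)

	func main() {
		// Expected images for v1.31.1 (abbreviated) versus an empty result,
		// as happens when `image list` runs against a stopped host.
		want := []string{
			"registry.k8s.io/kube-apiserver:v1.31.1",
			"registry.k8s.io/pause:3.10",
		}
		var got []string
		if diff := cmp.Diff(want, got); diff != "" {
			fmt.Printf("v1.31.1 images missing (-want +got):\n%s", diff)
		}
	}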

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-231000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-231000 --alsologtostderr -v=1: exit status 83 (43.812709ms)

                                                
                                                
-- stdout --
	* The control-plane node default-k8s-diff-port-231000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-231000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 10:42:45.981883    6900 out.go:345] Setting OutFile to fd 1 ...
	I0914 10:42:45.982031    6900 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:42:45.982035    6900 out.go:358] Setting ErrFile to fd 2...
	I0914 10:42:45.982037    6900 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:42:45.982196    6900 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19643-1079/.minikube/bin
	I0914 10:42:45.982420    6900 out.go:352] Setting JSON to false
	I0914 10:42:45.982426    6900 mustload.go:65] Loading cluster: default-k8s-diff-port-231000
	I0914 10:42:45.982642    6900 config.go:182] Loaded profile config "default-k8s-diff-port-231000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 10:42:45.986768    6900 out.go:177] * The control-plane node default-k8s-diff-port-231000 host is not running: state=Stopped
	I0914 10:42:45.990841    6900 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-231000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-231000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-231000 -n default-k8s-diff-port-231000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-231000 -n default-k8s-diff-port-231000: exit status 7 (30.208041ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-231000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-231000 -n default-k8s-diff-port-231000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-231000 -n default-k8s-diff-port-231000: exit status 7 (29.423834ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-231000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)
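
Note the two distinct exit codes in play: pause exits 83 when the control-plane host is stopped, which fails the test, while the post-mortem status exits 7, which helpers_test.go explicitly tolerates as "may be ok". A sketch of inspecting those codes the way a harness would (hypothetical direct invocation; binary path and profile name as in the log):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64", "status",
			"--format={{.Host}}", "-p", "default-k8s-diff-port-231000")
		out, err := cmd.Output()
		fmt.Printf("%s", out) // prints "Stopped" for a halted host
		var ee *exec.ExitError
		if errors.As(err, &ee) && ee.ExitCode() == 7 {
			// Stopped-host status; the harness logs this as non-fatal.
			fmt.Println("status error: exit status 7 (may be ok)")
		}
	}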

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (5.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-566000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-566000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.184874416s)

                                                
                                                
-- stdout --
	* [newest-cni-566000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19643
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-566000" primary control-plane node in "newest-cni-566000" cluster
	* Restarting existing qemu2 VM for "newest-cni-566000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-566000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 10:42:51.811739    6946 out.go:345] Setting OutFile to fd 1 ...
	I0914 10:42:51.811876    6946 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:42:51.811880    6946 out.go:358] Setting ErrFile to fd 2...
	I0914 10:42:51.811883    6946 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:42:51.812006    6946 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19643-1079/.minikube/bin
	I0914 10:42:51.813073    6946 out.go:352] Setting JSON to false
	I0914 10:42:51.829366    6946 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4334,"bootTime":1726331437,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0914 10:42:51.829427    6946 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0914 10:42:51.833746    6946 out.go:177] * [newest-cni-566000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0914 10:42:51.840734    6946 out.go:177]   - MINIKUBE_LOCATION=19643
	I0914 10:42:51.840784    6946 notify.go:220] Checking for updates...
	I0914 10:42:51.847787    6946 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	I0914 10:42:51.850785    6946 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 10:42:51.853756    6946 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 10:42:51.856757    6946 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	I0914 10:42:51.859855    6946 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 10:42:51.862982    6946 config.go:182] Loaded profile config "newest-cni-566000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 10:42:51.863242    6946 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 10:42:51.867775    6946 out.go:177] * Using the qemu2 driver based on existing profile
	I0914 10:42:51.874778    6946 start.go:297] selected driver: qemu2
	I0914 10:42:51.874788    6946 start.go:901] validating driver "qemu2" against &{Name:newest-cni-566000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.1 ClusterName:newest-cni-566000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] Lis
tenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 10:42:51.874837    6946 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 10:42:51.877212    6946 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0914 10:42:51.877246    6946 cni.go:84] Creating CNI manager for ""
	I0914 10:42:51.877265    6946 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 10:42:51.877286    6946 start.go:340] cluster config:
	{Name:newest-cni-566000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-566000 Namespace:default APIServe
rHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0
CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 10:42:51.880968    6946 iso.go:125] acquiring lock: {Name:mk9ce8cb266895455f3a8fa26b67755853c5c1f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 10:42:51.888664    6946 out.go:177] * Starting "newest-cni-566000" primary control-plane node in "newest-cni-566000" cluster
	I0914 10:42:51.892809    6946 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0914 10:42:51.892828    6946 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0914 10:42:51.892837    6946 cache.go:56] Caching tarball of preloaded images
	I0914 10:42:51.892908    6946 preload.go:172] Found /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 10:42:51.892917    6946 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0914 10:42:51.892988    6946 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/newest-cni-566000/config.json ...
	I0914 10:42:51.893458    6946 start.go:360] acquireMachinesLock for newest-cni-566000: {Name:mk3755b4300f547653a71a0686083053e04fcd63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 10:42:51.893493    6946 start.go:364] duration metric: took 28.625µs to acquireMachinesLock for "newest-cni-566000"
	I0914 10:42:51.893511    6946 start.go:96] Skipping create...Using existing machine configuration
	I0914 10:42:51.893517    6946 fix.go:54] fixHost starting: 
	I0914 10:42:51.893642    6946 fix.go:112] recreateIfNeeded on newest-cni-566000: state=Stopped err=<nil>
	W0914 10:42:51.893651    6946 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 10:42:51.896697    6946 out.go:177] * Restarting existing qemu2 VM for "newest-cni-566000" ...
	I0914 10:42:51.904786    6946 qemu.go:418] Using hvf for hardware acceleration
	I0914 10:42:51.904831    6946 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/newest-cni-566000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/newest-cni-566000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/newest-cni-566000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:39:be:a5:74:c9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/newest-cni-566000/disk.qcow2
	I0914 10:42:51.907001    6946 main.go:141] libmachine: STDOUT: 
	I0914 10:42:51.907019    6946 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 10:42:51.907050    6946 fix.go:56] duration metric: took 13.534ms for fixHost
	I0914 10:42:51.907056    6946 start.go:83] releasing machines lock for "newest-cni-566000", held for 13.558917ms
	W0914 10:42:51.907062    6946 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 10:42:51.907102    6946 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 10:42:51.907107    6946 start.go:729] Will try again in 5 seconds ...
	I0914 10:42:56.909108    6946 start.go:360] acquireMachinesLock for newest-cni-566000: {Name:mk3755b4300f547653a71a0686083053e04fcd63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 10:42:56.909475    6946 start.go:364] duration metric: took 284.791µs to acquireMachinesLock for "newest-cni-566000"
	I0914 10:42:56.909614    6946 start.go:96] Skipping create...Using existing machine configuration
	I0914 10:42:56.909632    6946 fix.go:54] fixHost starting: 
	I0914 10:42:56.910295    6946 fix.go:112] recreateIfNeeded on newest-cni-566000: state=Stopped err=<nil>
	W0914 10:42:56.910320    6946 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 10:42:56.919936    6946 out.go:177] * Restarting existing qemu2 VM for "newest-cni-566000" ...
	I0914 10:42:56.923822    6946 qemu.go:418] Using hvf for hardware acceleration
	I0914 10:42:56.924079    6946 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/newest-cni-566000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/newest-cni-566000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/newest-cni-566000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:39:be:a5:74:c9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19643-1079/.minikube/machines/newest-cni-566000/disk.qcow2
	I0914 10:42:56.932903    6946 main.go:141] libmachine: STDOUT: 
	I0914 10:42:56.932971    6946 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 10:42:56.933045    6946 fix.go:56] duration metric: took 23.412875ms for fixHost
	I0914 10:42:56.933070    6946 start.go:83] releasing machines lock for "newest-cni-566000", held for 23.572958ms
	W0914 10:42:56.933256    6946 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-566000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-566000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 10:42:56.940911    6946 out.go:201] 
	W0914 10:42:56.944924    6946 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 10:42:56.944952    6946 out.go:270] * 
	* 
	W0914 10:42:56.947404    6946 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 10:42:56.954897    6946 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-566000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-566000 -n newest-cni-566000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-566000 -n newest-cni-566000: exit status 7 (69.008584ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-566000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.26s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-566000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-566000 -n newest-cni-566000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-566000 -n newest-cni-566000: exit status 7 (30.524958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-566000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-566000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-566000 --alsologtostderr -v=1: exit status 83 (43.32275ms)

                                                
                                                
-- stdout --
	* The control-plane node newest-cni-566000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-566000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 10:42:57.140396    6960 out.go:345] Setting OutFile to fd 1 ...
	I0914 10:42:57.140552    6960 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:42:57.140555    6960 out.go:358] Setting ErrFile to fd 2...
	I0914 10:42:57.140558    6960 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:42:57.140686    6960 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19643-1079/.minikube/bin
	I0914 10:42:57.140903    6960 out.go:352] Setting JSON to false
	I0914 10:42:57.140908    6960 mustload.go:65] Loading cluster: newest-cni-566000
	I0914 10:42:57.141123    6960 config.go:182] Loaded profile config "newest-cni-566000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 10:42:57.145885    6960 out.go:177] * The control-plane node newest-cni-566000 host is not running: state=Stopped
	I0914 10:42:57.149877    6960 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-566000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-566000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-566000 -n newest-cni-566000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-566000 -n newest-cni-566000: exit status 7 (30.429875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-566000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-566000 -n newest-cni-566000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-566000 -n newest-cni-566000: exit status 7 (30.455166ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-566000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)

                                                
                                    

Test pass (154/274)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.12
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.1
12 TestDownloadOnly/v1.31.1/json-events 6.12
13 TestDownloadOnly/v1.31.1/preload-exists 0
16 TestDownloadOnly/v1.31.1/kubectl 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.07
18 TestDownloadOnly/v1.31.1/DeleteAll 0.1
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.1
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 201.54
29 TestAddons/serial/Volcano 41.31
31 TestAddons/serial/GCPAuth/Namespaces 0.09
34 TestAddons/parallel/Ingress 18.26
35 TestAddons/parallel/InspektorGadget 10.33
36 TestAddons/parallel/MetricsServer 6.27
39 TestAddons/parallel/CSI 31.55
40 TestAddons/parallel/Headlamp 16.63
41 TestAddons/parallel/CloudSpanner 5.23
42 TestAddons/parallel/LocalPath 53.25
43 TestAddons/parallel/NvidiaDevicePlugin 6.18
44 TestAddons/parallel/Yakd 10.3
45 TestAddons/StoppedEnableDisable 9.41
53 TestHyperKitDriverInstallOrUpdate 10.32
56 TestErrorSpam/setup 33.58
57 TestErrorSpam/start 0.34
58 TestErrorSpam/status 0.24
59 TestErrorSpam/pause 0.64
60 TestErrorSpam/unpause 0.59
61 TestErrorSpam/stop 64.26
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 47.49
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 57.03
68 TestFunctional/serial/KubeContext 0.03
69 TestFunctional/serial/KubectlGetPods 0.05
72 TestFunctional/serial/CacheCmd/cache/add_remote 2.58
73 TestFunctional/serial/CacheCmd/cache/add_local 1.61
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
75 TestFunctional/serial/CacheCmd/cache/list 0.03
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.07
77 TestFunctional/serial/CacheCmd/cache/cache_reload 0.63
78 TestFunctional/serial/CacheCmd/cache/delete 0.07
79 TestFunctional/serial/MinikubeKubectlCmd 1.86
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.02
81 TestFunctional/serial/ExtraConfig 38.95
82 TestFunctional/serial/ComponentHealth 0.04
83 TestFunctional/serial/LogsCmd 0.65
84 TestFunctional/serial/LogsFileCmd 0.6
85 TestFunctional/serial/InvalidService 4.38
87 TestFunctional/parallel/ConfigCmd 0.23
88 TestFunctional/parallel/DashboardCmd 12.65
89 TestFunctional/parallel/DryRun 0.26
90 TestFunctional/parallel/InternationalLanguage 0.15
91 TestFunctional/parallel/StatusCmd 0.24
96 TestFunctional/parallel/AddonsCmd 0.1
97 TestFunctional/parallel/PersistentVolumeClaim 24.77
99 TestFunctional/parallel/SSHCmd 0.12
100 TestFunctional/parallel/CpCmd 0.51
102 TestFunctional/parallel/FileSync 0.06
103 TestFunctional/parallel/CertSync 0.37
107 TestFunctional/parallel/NodeLabels 0.04
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.1
111 TestFunctional/parallel/License 0.22
112 TestFunctional/parallel/Version/short 0.04
113 TestFunctional/parallel/Version/components 0.16
114 TestFunctional/parallel/ImageCommands/ImageListShort 0.07
115 TestFunctional/parallel/ImageCommands/ImageListTable 0.08
116 TestFunctional/parallel/ImageCommands/ImageListJson 0.07
117 TestFunctional/parallel/ImageCommands/ImageListYaml 0.06
118 TestFunctional/parallel/ImageCommands/ImageBuild 1.83
119 TestFunctional/parallel/ImageCommands/Setup 1.78
120 TestFunctional/parallel/DockerEnv/bash 0.3
121 TestFunctional/parallel/UpdateContextCmd/no_changes 0.05
122 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.06
123 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.05
124 TestFunctional/parallel/ServiceCmd/DeployApp 11.09
125 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.47
126 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.36
127 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.18
128 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.16
129 TestFunctional/parallel/ImageCommands/ImageRemove 0.15
130 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.26
131 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.18
133 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 1.65
134 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
136 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.1
137 TestFunctional/parallel/ServiceCmd/List 0.11
138 TestFunctional/parallel/ServiceCmd/JSONOutput 0.08
139 TestFunctional/parallel/ServiceCmd/HTTPS 0.09
140 TestFunctional/parallel/ServiceCmd/Format 0.09
141 TestFunctional/parallel/ServiceCmd/URL 0.09
142 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.07
143 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
144 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.02
145 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
146 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
147 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
148 TestFunctional/parallel/ProfileCmd/profile_not_create 0.13
149 TestFunctional/parallel/ProfileCmd/profile_list 0.12
150 TestFunctional/parallel/ProfileCmd/profile_json_output 0.12
151 TestFunctional/parallel/MountCmd/any-port 5.15
152 TestFunctional/parallel/MountCmd/specific-port 0.96
153 TestFunctional/parallel/MountCmd/VerifyCleanup 1.05
154 TestFunctional/delete_echo-server_images 0.06
155 TestFunctional/delete_my-image_image 0.02
156 TestFunctional/delete_minikube_cached_images 0.01
160 TestMultiControlPlane/serial/StartCluster 178.08
161 TestMultiControlPlane/serial/DeployApp 4.42
162 TestMultiControlPlane/serial/PingHostFromPods 0.75
163 TestMultiControlPlane/serial/AddWorkerNode 52.78
164 TestMultiControlPlane/serial/NodeLabels 0.13
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.24
166 TestMultiControlPlane/serial/CopyFile 4.3
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 78.13
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.05
185 TestJSONOutput/start/Audit 0
187 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Audit 0
193 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Audit 0
199 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/stop/Command 1.77
203 TestJSONOutput/stop/Audit 0
205 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
207 TestErrorJSONOutput 0.2
212 TestMainNoArgs 0.03
259 TestStoppedBinaryUpgrade/Setup 1
271 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
275 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
276 TestNoKubernetes/serial/ProfileList 31.43
277 TestNoKubernetes/serial/Stop 2.08
279 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
289 TestStoppedBinaryUpgrade/MinikubeLogs 0.68
294 TestStartStop/group/old-k8s-version/serial/Stop 3.25
295 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.1
307 TestStartStop/group/no-preload/serial/Stop 4
308 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.12
316 TestStartStop/group/embed-certs/serial/Stop 3.2
319 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.12
327 TestStartStop/group/default-k8s-diff-port/serial/Stop 1.98
330 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.13
336 TestStartStop/group/newest-cni/serial/DeployApp 0
337 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
338 TestStartStop/group/newest-cni/serial/Stop 3.45
339 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.14
341 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
342 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-612000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-612000: exit status 85 (97.306792ms)
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-612000 | jenkins | v1.34.0 | 14 Sep 24 09:42 PDT |          |
	|         | -p download-only-612000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/14 09:42:53
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 09:42:53.948923    1605 out.go:345] Setting OutFile to fd 1 ...
	I0914 09:42:53.949092    1605 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 09:42:53.949095    1605 out.go:358] Setting ErrFile to fd 2...
	I0914 09:42:53.949097    1605 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 09:42:53.949226    1605 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19643-1079/.minikube/bin
	W0914 09:42:53.949315    1605 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19643-1079/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19643-1079/.minikube/config/config.json: no such file or directory
	I0914 09:42:53.950552    1605 out.go:352] Setting JSON to true
	I0914 09:42:53.968000    1605 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":736,"bootTime":1726331437,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0914 09:42:53.968076    1605 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0914 09:42:53.974437    1605 out.go:97] [download-only-612000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0914 09:42:53.974593    1605 notify.go:220] Checking for updates...
	W0914 09:42:53.974679    1605 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/preloaded-tarball: no such file or directory
	I0914 09:42:53.978468    1605 out.go:169] MINIKUBE_LOCATION=19643
	I0914 09:42:53.985577    1605 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	I0914 09:42:53.989337    1605 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 09:42:53.992439    1605 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 09:42:53.995500    1605 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	W0914 09:42:54.001360    1605 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0914 09:42:54.001565    1605 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 09:42:54.006459    1605 out.go:97] Using the qemu2 driver based on user configuration
	I0914 09:42:54.006478    1605 start.go:297] selected driver: qemu2
	I0914 09:42:54.006492    1605 start.go:901] validating driver "qemu2" against <nil>
	I0914 09:42:54.006564    1605 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0914 09:42:54.009471    1605 out.go:169] Automatically selected the socket_vmnet network
	I0914 09:42:54.015156    1605 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0914 09:42:54.015246    1605 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0914 09:42:54.015293    1605 cni.go:84] Creating CNI manager for ""
	I0914 09:42:54.015325    1605 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0914 09:42:54.015367    1605 start.go:340] cluster config:
	{Name:download-only-612000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-612000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 09:42:54.020987    1605 iso.go:125] acquiring lock: {Name:mk9ce8cb266895455f3a8fa26b67755853c5c1f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 09:42:54.028954    1605 out.go:97] Downloading VM boot image ...
	I0914 09:42:54.028969    1605 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/iso/arm64/minikube-v1.34.0-1726281733-19643-arm64.iso
	I0914 09:43:01.545499    1605 out.go:97] Starting "download-only-612000" primary control-plane node in "download-only-612000" cluster
	I0914 09:43:01.545525    1605 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0914 09:43:01.601812    1605 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0914 09:43:01.601830    1605 cache.go:56] Caching tarball of preloaded images
	I0914 09:43:01.601973    1605 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0914 09:43:01.607105    1605 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0914 09:43:01.607112    1605 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0914 09:43:01.684228    1605 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0914 09:43:06.741479    1605 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0914 09:43:06.741646    1605 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0914 09:43:07.438440    1605 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0914 09:43:07.438654    1605 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/download-only-612000/config.json ...
	I0914 09:43:07.438670    1605 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/download-only-612000/config.json: {Name:mk48aad586bee83c51d5ade0281ee793bb948236 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 09:43:07.438892    1605 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0914 09:43:07.439087    1605 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0914 09:43:07.959728    1605 out.go:193] 
	W0914 09:43:07.965780    1605 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19643-1079/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1090dd780 0x1090dd780 0x1090dd780 0x1090dd780 0x1090dd780 0x1090dd780 0x1090dd780] Decompressors:map[bz2:0x1400000eb80 gz:0x1400000eb88 tar:0x1400000eb30 tar.bz2:0x1400000eb40 tar.gz:0x1400000eb50 tar.xz:0x1400000eb60 tar.zst:0x1400000eb70 tbz2:0x1400000eb40 tgz:0x1400000eb50 txz:0x1400000eb60 tzst:0x1400000eb70 xz:0x1400000eb90 zip:0x1400000eba0 zst:0x1400000eb98] Getters:map[file:0x14000110770 http:0x1400013c0a0 https:0x1400013c0f0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0914 09:43:07.965802    1605 out_reason.go:110] 
	W0914 09:43:07.976748    1605 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 09:43:07.980737    1605 out.go:193] 
	
	
	* The control-plane node download-only-612000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-612000"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)
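
The cached-kubectl error embedded in the log above is an upstream 404, not a harness bug: dl.k8s.io publishes no darwin/arm64 kubectl binary for v1.20.0 (releases that old were built before Go could target darwin/arm64), so the checksum fetch fails and minikube records the warning while the test itself still passes. A minimal reproduction, assuming only that curl is available on the host:

	# Checksum URL copied from the log above; expect "404" on stdout.
	curl -s -o /dev/null -w '%{http_code}\n' https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256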

TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-612000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.10s)

TestDownloadOnly/v1.31.1/json-events (6.12s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-039000 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-039000 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=qemu2 : (6.115634541s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (6.12s)

TestDownloadOnly/v1.31.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.1/preload-exists
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
--- PASS: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestDownloadOnly/v1.31.1/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-039000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-039000: exit status 85 (73.816833ms)
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-612000 | jenkins | v1.34.0 | 14 Sep 24 09:42 PDT |                     |
	|         | -p download-only-612000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 14 Sep 24 09:43 PDT | 14 Sep 24 09:43 PDT |
	| delete  | -p download-only-612000        | download-only-612000 | jenkins | v1.34.0 | 14 Sep 24 09:43 PDT | 14 Sep 24 09:43 PDT |
	| start   | -o=json --download-only        | download-only-039000 | jenkins | v1.34.0 | 14 Sep 24 09:43 PDT |                     |
	|         | -p download-only-039000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/14 09:43:08
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 09:43:08.398133    1629 out.go:345] Setting OutFile to fd 1 ...
	I0914 09:43:08.398255    1629 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 09:43:08.398258    1629 out.go:358] Setting ErrFile to fd 2...
	I0914 09:43:08.398261    1629 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 09:43:08.398412    1629 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19643-1079/.minikube/bin
	I0914 09:43:08.399483    1629 out.go:352] Setting JSON to true
	I0914 09:43:08.415736    1629 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":751,"bootTime":1726331437,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0914 09:43:08.415807    1629 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0914 09:43:08.419278    1629 out.go:97] [download-only-039000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0914 09:43:08.419383    1629 notify.go:220] Checking for updates...
	I0914 09:43:08.423074    1629 out.go:169] MINIKUBE_LOCATION=19643
	I0914 09:43:08.426183    1629 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	I0914 09:43:08.430213    1629 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 09:43:08.431726    1629 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 09:43:08.435141    1629 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	W0914 09:43:08.441173    1629 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0914 09:43:08.441343    1629 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 09:43:08.444076    1629 out.go:97] Using the qemu2 driver based on user configuration
	I0914 09:43:08.444085    1629 start.go:297] selected driver: qemu2
	I0914 09:43:08.444088    1629 start.go:901] validating driver "qemu2" against <nil>
	I0914 09:43:08.444133    1629 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0914 09:43:08.447126    1629 out.go:169] Automatically selected the socket_vmnet network
	I0914 09:43:08.452339    1629 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0914 09:43:08.452438    1629 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0914 09:43:08.452461    1629 cni.go:84] Creating CNI manager for ""
	I0914 09:43:08.452485    1629 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 09:43:08.452494    1629 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0914 09:43:08.452536    1629 start.go:340] cluster config:
	{Name:download-only-039000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-039000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 09:43:08.456125    1629 iso.go:125] acquiring lock: {Name:mk9ce8cb266895455f3a8fa26b67755853c5c1f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 09:43:08.459051    1629 out.go:97] Starting "download-only-039000" primary control-plane node in "download-only-039000" cluster
	I0914 09:43:08.459057    1629 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0914 09:43:08.514496    1629 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0914 09:43:08.514508    1629 cache.go:56] Caching tarball of preloaded images
	I0914 09:43:08.514645    1629 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0914 09:43:08.518817    1629 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0914 09:43:08.518824    1629 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 ...
	I0914 09:43:08.588656    1629 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4?checksum=md5:402f69b5e09ccb1e1dbe401b4cdd104d -> /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0914 09:43:12.600610    1629 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 ...
	I0914 09:43:12.600764    1629 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 ...
	I0914 09:43:13.123063    1629 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0914 09:43:13.123262    1629 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/download-only-039000/config.json ...
	I0914 09:43:13.123278    1629 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/download-only-039000/config.json: {Name:mk73983aaddd3dde8f1e16d0fd757e0a0a38ada9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 09:43:13.123606    1629 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0914 09:43:13.123725    1629 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19643-1079/.minikube/cache/darwin/arm64/v1.31.1/kubectl
	
	
	* The control-plane node download-only-039000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-039000"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.07s)

TestDownloadOnly/v1.31.1/DeleteAll (0.1s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.10s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-039000
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.10s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-528000
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-528000: exit status 85 (62.219083ms)
-- stdout --
	* Profile "addons-528000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-528000"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-528000
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-528000: exit status 85 (58.166416ms)
-- stdout --
	* Profile "addons-528000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-528000"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)
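
Both PreSetup subtests assert the same contract: addon commands aimed at a profile that does not exist must fail cleanly with exit status 85 rather than create anything. A sketch of the same check by hand, using the binary and profile name from this run:

	out/minikube-darwin-arm64 addons enable dashboard -p addons-528000
	echo "exit status: $?"   # expected: 85, matching the Non-zero exit lines above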

TestAddons/Setup (201.54s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-528000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-darwin-arm64 start -p addons-528000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns: (3m21.542962834s)
--- PASS: TestAddons/Setup (201.54s)

TestAddons/serial/Volcano (41.31s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:913: volcano-controller stabilized in 7.227833ms
addons_test.go:905: volcano-admission stabilized in 7.282167ms
addons_test.go:897: volcano-scheduler stabilized in 7.320417ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-dlvg4" [a6c42ec0-075f-4853-b3d9-81b9ed2195b3] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.005703125s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-ntcbp" [84eaedba-dc26-499e-b40d-5154e36e8b0c] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.004513875s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-mx695" [b961296b-614a-4d5e-a681-3b8b1312f6c3] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.008373416s
addons_test.go:932: (dbg) Run:  kubectl --context addons-528000 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-528000 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-528000 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [743e047c-c766-4cee-9290-9ac28c268870] Pending
helpers_test.go:344: "test-job-nginx-0" [743e047c-c766-4cee-9290-9ac28c268870] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [743e047c-c766-4cee-9290-9ac28c268870] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 16.010616s
addons_test.go:968: (dbg) Run:  out/minikube-darwin-arm64 -p addons-528000 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-darwin-arm64 -p addons-528000 addons disable volcano --alsologtostderr -v=1: (10.040274708s)
--- PASS: TestAddons/serial/Volcano (41.31s)
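
The 6m0s and 3m0s waits above are label-based polls done by the test helpers; roughly the same condition can be expressed with kubectl wait (a sketch, not what the harness actually runs), for example for the scheduler pods:

	kubectl --context addons-528000 -n volcano-system wait \
	  --for=condition=Ready pod -l app=volcano-scheduler --timeout=6m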

TestAddons/serial/GCPAuth/Namespaces (0.09s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-528000 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-528000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.09s)

TestAddons/parallel/Ingress (18.26s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-528000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-528000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-528000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [c7da540f-164a-4b93-bd7f-e54e62053a59] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [c7da540f-164a-4b93-bd7f-e54e62053a59] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.0085495s
addons_test.go:264: (dbg) Run:  out/minikube-darwin-arm64 -p addons-528000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-528000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-darwin-arm64 -p addons-528000 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.105.2
addons_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p addons-528000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:313: (dbg) Run:  out/minikube-darwin-arm64 -p addons-528000 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-darwin-arm64 -p addons-528000 addons disable ingress --alsologtostderr -v=1: (7.255930458s)
--- PASS: TestAddons/parallel/Ingress (18.26s)

TestAddons/parallel/InspektorGadget (10.33s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-zxdlq" [02f67fbd-a42a-46f8-bb91-252ea99ccb7b] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004078541s
addons_test.go:851: (dbg) Run:  out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-528000
addons_test.go:851: (dbg) Done: out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-528000: (5.323447459s)
--- PASS: TestAddons/parallel/InspektorGadget (10.33s)

TestAddons/parallel/MetricsServer (6.27s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 1.360208ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-x9frd" [c949acd4-0638-409f-9b76-862d5121cb75] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.006346875s
addons_test.go:417: (dbg) Run:  kubectl --context addons-528000 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-darwin-arm64 -p addons-528000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.27s)

TestAddons/parallel/CSI (31.55s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 3.163083ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-528000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-528000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-528000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-528000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [3d9a4b2e-7375-4185-b797-7461042b7d3b] Pending
helpers_test.go:344: "task-pv-pod" [3d9a4b2e-7375-4185-b797-7461042b7d3b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [3d9a4b2e-7375-4185-b797-7461042b7d3b] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.008650792s
addons_test.go:590: (dbg) Run:  kubectl --context addons-528000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-528000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-528000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-528000 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-528000 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-528000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-528000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-528000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-528000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-528000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-528000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-528000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-528000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [e88e3faf-ca85-4f8c-83f9-ed156e4cd3ad] Pending
helpers_test.go:344: "task-pv-pod-restore" [e88e3faf-ca85-4f8c-83f9-ed156e4cd3ad] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [e88e3faf-ca85-4f8c-83f9-ed156e4cd3ad] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003815375s
addons_test.go:632: (dbg) Run:  kubectl --context addons-528000 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-528000 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-528000 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-darwin-arm64 -p addons-528000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-darwin-arm64 -p addons-528000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.148501792s)
addons_test.go:648: (dbg) Run:  out/minikube-darwin-arm64 -p addons-528000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (31.55s)
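
The repeated jsonpath polls above are how the helper decides the PVCs are Bound and the snapshot is ready; both checks are plain kubectl invocations and can be rerun by hand against the same profile:

	kubectl --context addons-528000 get pvc hpvc -o jsonpath={.status.phase} -n default
	kubectl --context addons-528000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default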

TestAddons/parallel/Headlamp (16.63s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-darwin-arm64 addons enable headlamp -p addons-528000 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-mqdhk" [351dc21e-88ee-4e29-afc7-eaba2e1cc9b1] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-mqdhk" [351dc21e-88ee-4e29-afc7-eaba2e1cc9b1] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.005826792s
addons_test.go:839: (dbg) Run:  out/minikube-darwin-arm64 -p addons-528000 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-darwin-arm64 -p addons-528000 addons disable headlamp --alsologtostderr -v=1: (5.283126208s)
--- PASS: TestAddons/parallel/Headlamp (16.63s)

TestAddons/parallel/CloudSpanner (5.23s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-6f6d2" [72200ec3-1e21-41ca-b25f-d2d1eaa408e4] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.012151917s
addons_test.go:870: (dbg) Run:  out/minikube-darwin-arm64 addons disable cloud-spanner -p addons-528000
--- PASS: TestAddons/parallel/CloudSpanner (5.23s)

TestAddons/parallel/LocalPath (53.25s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-528000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-528000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-528000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-528000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-528000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-528000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-528000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [a8aefb98-e923-4801-b6b1-17cd36b5117a] Pending
helpers_test.go:344: "test-local-path" [a8aefb98-e923-4801-b6b1-17cd36b5117a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [a8aefb98-e923-4801-b6b1-17cd36b5117a] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [a8aefb98-e923-4801-b6b1-17cd36b5117a] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.010842917s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-528000 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-darwin-arm64 -p addons-528000 ssh "cat /opt/local-path-provisioner/pvc-40cba7ad-1232-4eba-80ce-dc029efeb173_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-528000 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-528000 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-darwin-arm64 -p addons-528000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-darwin-arm64 -p addons-528000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.705862917s)
--- PASS: TestAddons/parallel/LocalPath (53.25s)

TestAddons/parallel/NvidiaDevicePlugin (6.18s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-hq9wm" [0ce5bcbf-d7fd-485b-8a99-7ada928efe6a] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.008266958s
addons_test.go:1064: (dbg) Run:  out/minikube-darwin-arm64 addons disable nvidia-device-plugin -p addons-528000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.18s)

TestAddons/parallel/Yakd (10.3s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-8rqjj" [c1bfacbf-ef81-401a-80bf-cd814373f7f8] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.010168458s
addons_test.go:1076: (dbg) Run:  out/minikube-darwin-arm64 -p addons-528000 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-darwin-arm64 -p addons-528000 addons disable yakd --alsologtostderr -v=1: (5.29181925s)
--- PASS: TestAddons/parallel/Yakd (10.30s)
x
+
TestAddons/StoppedEnableDisable (9.41s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-darwin-arm64 stop -p addons-528000
addons_test.go:174: (dbg) Done: out/minikube-darwin-arm64 stop -p addons-528000: (9.218572209s)
addons_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-528000
addons_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-528000
addons_test.go:187: (dbg) Run:  out/minikube-darwin-arm64 addons disable gvisor -p addons-528000
--- PASS: TestAddons/StoppedEnableDisable (9.41s)
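Note: the ordering is the point of this test — addon toggles are accepted while the cluster is stopped and are recorded in the profile rather than rejected. A minimal sketch of the same sequence:

    out/minikube-darwin-arm64 stop -p addons-528000
    out/minikube-darwin-arm64 addons enable dashboard -p addons-528000    # accepted while stopped
    out/minikube-darwin-arm64 addons disable dashboard -p addons-528000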

TestHyperKitDriverInstallOrUpdate (10.32s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (10.32s)

TestErrorSpam/setup (33.58s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-374000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-374000 --driver=qemu2 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -p nospam-374000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-374000 --driver=qemu2 : (33.58357575s)
error_spam_test.go:91: acceptable stderr: "! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.31.1."
--- PASS: TestErrorSpam/setup (33.58s)

TestErrorSpam/start (0.34s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-374000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-374000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-374000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-374000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-374000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-374000 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

TestErrorSpam/status (0.24s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-374000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-374000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-374000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-374000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-374000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-374000 status
--- PASS: TestErrorSpam/status (0.24s)

TestErrorSpam/pause (0.64s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-374000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-374000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-374000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-374000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-374000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-374000 pause
--- PASS: TestErrorSpam/pause (0.64s)

TestErrorSpam/unpause (0.59s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-374000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-374000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-374000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-374000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-374000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-374000 unpause
--- PASS: TestErrorSpam/unpause (0.59s)

TestErrorSpam/stop (64.26s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-374000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-374000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-374000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-374000 stop: (12.198334875s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-374000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-374000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-374000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-374000 stop: (26.018536875s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-374000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-374000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-374000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-374000 stop: (26.039106083s)
--- PASS: TestErrorSpam/stop (64.26s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /Users/jenkins/minikube-integration/19643-1079/.minikube/files/etc/test/nested/copy/1603/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (47.49s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-855000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2234: (dbg) Done: out/minikube-darwin-arm64 start -p functional-855000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (47.490519583s)
--- PASS: TestFunctional/serial/StartWithProxy (47.49s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (57.03s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-855000 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-darwin-arm64 start -p functional-855000 --alsologtostderr -v=8: (57.033526542s)
functional_test.go:663: soft start took 57.033950458s for "functional-855000" cluster.
--- PASS: TestFunctional/serial/SoftStart (57.03s)

TestFunctional/serial/KubeContext (0.03s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (0.05s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-855000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.05s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.58s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-855000 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-darwin-arm64 -p functional-855000 cache add registry.k8s.io/pause:3.1: (1.018155792s)
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-855000 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-855000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.58s)

TestFunctional/serial/CacheCmd/cache/add_local (1.61s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-855000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialCacheCmdcacheadd_local363060736/001
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-arm64 -p functional-855000 cache add minikube-local-cache-test:functional-855000
functional_test.go:1089: (dbg) Done: out/minikube-darwin-arm64 -p functional-855000 cache add minikube-local-cache-test:functional-855000: (1.292089375s)
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-arm64 -p functional-855000 cache delete minikube-local-cache-test:functional-855000
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-855000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.61s)
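Note: `cache add` also accepts images that exist only in the local Docker daemon, which is the round trip exercised above. A minimal sketch, where <build-dir> stands in for the temp directory the test generates:

    docker build -t minikube-local-cache-test:functional-855000 <build-dir>
    out/minikube-darwin-arm64 -p functional-855000 cache add minikube-local-cache-test:functional-855000
    out/minikube-darwin-arm64 -p functional-855000 cache delete minikube-local-cache-test:functional-855000
    docker rmi minikube-local-cache-test:functional-855000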

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.03s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-darwin-arm64 -p functional-855000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.07s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.63s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-darwin-arm64 -p functional-855000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-arm64 -p functional-855000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-855000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (65.22925ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-arm64 -p functional-855000 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-darwin-arm64 -p functional-855000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (0.63s)
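Note: the exit-status-1 block above is the expected negative check, not a failure — the image is removed inside the node, `crictl inspecti` confirms it is gone, and `cache reload` pushes the cached copy back. The sequence, in order:

    out/minikube-darwin-arm64 -p functional-855000 ssh sudo docker rmi registry.k8s.io/pause:latest
    out/minikube-darwin-arm64 -p functional-855000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 1: image absent
    out/minikube-darwin-arm64 -p functional-855000 cache reload
    out/minikube-darwin-arm64 -p functional-855000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 0: restored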

TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/serial/MinikubeKubectlCmd (1.86s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-darwin-arm64 -p functional-855000 kubectl -- --context functional-855000 get pods
functional_test.go:716: (dbg) Done: out/minikube-darwin-arm64 -p functional-855000 kubectl -- --context functional-855000 get pods: (1.864723542s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (1.86s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.02s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-855000 get pods
functional_test.go:741: (dbg) Done: out/kubectl --context functional-855000 get pods: (1.0161655s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.02s)

TestFunctional/serial/ExtraConfig (38.95s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-855000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0914 10:01:36.740249    1603 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/addons-528000/client.crt: no such file or directory" logger="UnhandledError"
E0914 10:01:36.750420    1603 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/addons-528000/client.crt: no such file or directory" logger="UnhandledError"
E0914 10:01:36.763283    1603 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/addons-528000/client.crt: no such file or directory" logger="UnhandledError"
E0914 10:01:36.786729    1603 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/addons-528000/client.crt: no such file or directory" logger="UnhandledError"
E0914 10:01:36.830020    1603 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/addons-528000/client.crt: no such file or directory" logger="UnhandledError"
E0914 10:01:36.912902    1603 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/addons-528000/client.crt: no such file or directory" logger="UnhandledError"
E0914 10:01:37.076594    1603 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/addons-528000/client.crt: no such file or directory" logger="UnhandledError"
E0914 10:01:37.400179    1603 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/addons-528000/client.crt: no such file or directory" logger="UnhandledError"
E0914 10:01:38.043782    1603 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/addons-528000/client.crt: no such file or directory" logger="UnhandledError"
E0914 10:01:39.327471    1603 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/addons-528000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-darwin-arm64 start -p functional-855000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (38.952421791s)
functional_test.go:761: restart took 38.952503916s for "functional-855000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (38.95s)
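Note: the cert_rotation errors interleaved above reference the addons-528000 profile, which was torn down earlier in this run; they appear to come from a leftover client-go watcher in the shared test process and do not affect this cluster (the restart still passed with --wait=all). The restart under test:

    out/minikube-darwin-arm64 start -p functional-855000 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all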

TestFunctional/serial/ComponentHealth (0.04s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-855000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.04s)

TestFunctional/serial/LogsCmd (0.65s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-darwin-arm64 -p functional-855000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.65s)

TestFunctional/serial/LogsFileCmd (0.6s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-darwin-arm64 -p functional-855000 logs --file /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialLogsFileCmd277960136/001/logs.txt
E0914 10:01:41.889111    1603 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/addons-528000/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/serial/LogsFileCmd (0.60s)

TestFunctional/serial/InvalidService (4.38s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-855000 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-darwin-arm64 service invalid-svc -p functional-855000
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-darwin-arm64 service invalid-svc -p functional-855000: exit status 115 (129.995667ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.105.4:32335 |
	|-----------|-------------|-------------|----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-855000 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-855000 delete -f testdata/invalidsvc.yaml: (1.149494583s)
--- PASS: TestFunctional/serial/InvalidService (4.38s)
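Note: exit status 115 (SVC_UNREACHABLE) is the expected result here, since invalidsvc.yaml defines a service with no running backing pod. A minimal reproduction of the check:

    kubectl --context functional-855000 apply -f testdata/invalidsvc.yaml
    out/minikube-darwin-arm64 service invalid-svc -p functional-855000   # expect exit 115, SVC_UNREACHABLE
    kubectl --context functional-855000 delete -f testdata/invalidsvc.yaml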

TestFunctional/parallel/ConfigCmd (0.23s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-855000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-855000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-855000 config get cpus: exit status 14 (32.97125ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-855000 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-855000 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-855000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-855000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-855000 config get cpus: exit status 14 (32.482791ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.23s)
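Note: both non-zero exits above are expected — exit status 14 with "specified key could not be found in config" simply signals that the key is unset. The set/get/unset cycle being verified:

    out/minikube-darwin-arm64 -p functional-855000 config get cpus     # exit 14 while unset
    out/minikube-darwin-arm64 -p functional-855000 config set cpus 2
    out/minikube-darwin-arm64 -p functional-855000 config get cpus     # prints 2
    out/minikube-darwin-arm64 -p functional-855000 config unset cpus
    out/minikube-darwin-arm64 -p functional-855000 config get cpus     # exit 14 again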

TestFunctional/parallel/DashboardCmd (12.65s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-855000 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-855000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2820: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (12.65s)

TestFunctional/parallel/DryRun (0.26s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-855000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-855000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (120.079709ms)

-- stdout --
	* [functional-855000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19643
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0914 10:02:31.960328    2796 out.go:345] Setting OutFile to fd 1 ...
	I0914 10:02:31.960457    2796 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:02:31.960462    2796 out.go:358] Setting ErrFile to fd 2...
	I0914 10:02:31.960464    2796 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:02:31.960591    2796 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19643-1079/.minikube/bin
	I0914 10:02:31.961595    2796 out.go:352] Setting JSON to false
	I0914 10:02:31.979955    2796 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1914,"bootTime":1726331437,"procs":482,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0914 10:02:31.980027    2796 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0914 10:02:31.990746    2796 out.go:177] * [functional-855000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0914 10:02:31.996674    2796 out.go:177]   - MINIKUBE_LOCATION=19643
	I0914 10:02:31.996687    2796 notify.go:220] Checking for updates...
	I0914 10:02:32.002800    2796 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	I0914 10:02:32.004214    2796 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 10:02:32.007769    2796 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 10:02:32.010819    2796 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	I0914 10:02:32.011974    2796 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 10:02:32.015165    2796 config.go:182] Loaded profile config "functional-855000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 10:02:32.015467    2796 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 10:02:32.019804    2796 out.go:177] * Using the qemu2 driver based on existing profile
	I0914 10:02:32.023932    2796 start.go:297] selected driver: qemu2
	I0914 10:02:32.023940    2796 start.go:901] validating driver "qemu2" against &{Name:functional-855000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-855000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 10:02:32.024010    2796 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 10:02:32.030722    2796 out.go:201] 
	W0914 10:02:32.034724    2796 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0914 10:02:32.038632    2796 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-855000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.26s)
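Note: --dry-run still runs full validation, which is what this test relies on — 250MB is below the 1800MB usable minimum quoted in the stderr above, so the first invocation exits 23 (RSRC_INSUFFICIENT_REQ_MEMORY) without touching the VM, while the second, with no memory override, validates cleanly:

    out/minikube-darwin-arm64 start -p functional-855000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2   # exit 23
    out/minikube-darwin-arm64 start -p functional-855000 --dry-run --alsologtostderr -v=1 --driver=qemu2             # exit 0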

TestFunctional/parallel/InternationalLanguage (0.15s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-855000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-855000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (149.69775ms)

-- stdout --
	* [functional-855000] minikube v1.34.0 sur Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19643
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0914 10:02:32.215506    2808 out.go:345] Setting OutFile to fd 1 ...
	I0914 10:02:32.215630    2808 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:02:32.215634    2808 out.go:358] Setting ErrFile to fd 2...
	I0914 10:02:32.215636    2808 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 10:02:32.215771    2808 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19643-1079/.minikube/bin
	I0914 10:02:32.217203    2808 out.go:352] Setting JSON to false
	I0914 10:02:32.235906    2808 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1915,"bootTime":1726331437,"procs":482,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0914 10:02:32.236031    2808 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0914 10:02:32.240826    2808 out.go:177] * [functional-855000] minikube v1.34.0 sur Darwin 14.5 (arm64)
	I0914 10:02:32.247766    2808 out.go:177]   - MINIKUBE_LOCATION=19643
	I0914 10:02:32.247864    2808 notify.go:220] Checking for updates...
	I0914 10:02:32.254669    2808 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	I0914 10:02:32.265736    2808 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 10:02:32.272665    2808 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 10:02:32.282634    2808 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	I0914 10:02:32.285754    2808 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 10:02:32.289085    2808 config.go:182] Loaded profile config "functional-855000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 10:02:32.289337    2808 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 10:02:32.295773    2808 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0914 10:02:32.306772    2808 start.go:297] selected driver: qemu2
	I0914 10:02:32.306777    2808 start.go:901] validating driver "qemu2" against &{Name:functional-855000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-855000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 10:02:32.306836    2808 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 10:02:32.321735    2808 out.go:201] 
	W0914 10:02:32.328798    2808 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0914 10:02:32.332776    2808 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)
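Note: the command here is identical to the DryRun case; only the locale differs, which is why the output above is in French. The log does not show how the locale is set; assuming the test exports a French locale through the environment, the equivalent manual invocation would look like:

    LC_ALL=fr out/minikube-darwin-arm64 start -p functional-855000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2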

TestFunctional/parallel/StatusCmd (0.24s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-darwin-arm64 -p functional-855000 status
functional_test.go:860: (dbg) Run:  out/minikube-darwin-arm64 -p functional-855000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-darwin-arm64 -p functional-855000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.24s)
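Note: the -f flag takes a Go template over the status struct, so the field set can be trimmed to whatever a script needs ("kublet" is spelled that way in the test's template, not a transcription error here). The three output modes exercised:

    out/minikube-darwin-arm64 -p functional-855000 status
    out/minikube-darwin-arm64 -p functional-855000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
    out/minikube-darwin-arm64 -p functional-855000 status -o json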

TestFunctional/parallel/AddonsCmd (0.1s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-darwin-arm64 -p functional-855000 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-darwin-arm64 -p functional-855000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.10s)

TestFunctional/parallel/PersistentVolumeClaim (24.77s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [4244fcf7-9915-462f-92e2-f8fc97fcfaad] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003783375s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-855000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-855000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-855000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-855000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [c3c0c4ec-2ac3-4bd2-8264-58f4c3ff5be7] Pending
helpers_test.go:344: "sp-pod" [c3c0c4ec-2ac3-4bd2-8264-58f4c3ff5be7] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [c3c0c4ec-2ac3-4bd2-8264-58f4c3ff5be7] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.010719459s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-855000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-855000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-855000 delete -f testdata/storage-provisioner/pod.yaml: (1.241801083s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-855000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [994599e7-cc13-49dc-95c2-496cde65b686] Pending
E0914 10:02:17.736941    1603 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/addons-528000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "sp-pod" [994599e7-cc13-49dc-95c2-496cde65b686] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [994599e7-cc13-49dc-95c2-496cde65b686] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.0111825s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-855000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.77s)
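Note: the pass criterion is persistence, not just scheduling — a file written through the claim must survive deletion and re-creation of the pod. Condensed from the run above:

    kubectl --context functional-855000 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-855000 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-855000 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-855000 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-855000 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-855000 exec sp-pod -- ls /tmp/mount   # foo must still be there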

TestFunctional/parallel/SSHCmd (0.12s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-darwin-arm64 -p functional-855000 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-darwin-arm64 -p functional-855000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.12s)

TestFunctional/parallel/CpCmd (0.51s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-855000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-855000 ssh -n functional-855000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-855000 cp functional-855000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd972777456/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-855000 ssh -n functional-855000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-855000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-855000 ssh -n functional-855000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.51s)
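Note: `minikube cp` copies in both directions, with the node side addressed as <profile>:<path>, and each copy above is verified with `ssh sudo cat`. The three variants (local destination shortened here):

    out/minikube-darwin-arm64 -p functional-855000 cp testdata/cp-test.txt /home/docker/cp-test.txt
    out/minikube-darwin-arm64 -p functional-855000 cp functional-855000:/home/docker/cp-test.txt ./cp-test.txt
    out/minikube-darwin-arm64 -p functional-855000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt   # missing target dirs are created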

TestFunctional/parallel/FileSync (0.06s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/1603/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-darwin-arm64 -p functional-855000 ssh "sudo cat /etc/test/nested/copy/1603/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.06s)

TestFunctional/parallel/CertSync (0.37s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/1603.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-855000 ssh "sudo cat /etc/ssl/certs/1603.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/1603.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-855000 ssh "sudo cat /usr/share/ca-certificates/1603.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-855000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/16032.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-855000 ssh "sudo cat /etc/ssl/certs/16032.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/16032.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-855000 ssh "sudo cat /usr/share/ca-certificates/16032.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-855000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.37s)

TestFunctional/parallel/NodeLabels (0.04s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-855000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.04s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.1s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-darwin-arm64 -p functional-855000 ssh "sudo systemctl is-active crio"
E0914 10:01:47.011017    1603 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/addons-528000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-855000 ssh "sudo systemctl is-active crio": exit status 1 (96.291917ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.10s)
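Note: exit status 3 is the expected outcome here — `systemctl is-active` exits non-zero for anything but an active unit, and on this docker-runtime cluster crio should be inactive. The check:

    out/minikube-darwin-arm64 -p functional-855000 ssh "sudo systemctl is-active crio"   # prints "inactive", exits 3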

TestFunctional/parallel/License (0.22s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.22s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-darwin-arm64 -p functional-855000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.16s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-darwin-arm64 -p functional-855000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.16s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-855000 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-855000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-855000
docker.io/kicbase/echo-server:functional-855000
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-855000 image ls --format short --alsologtostderr:
I0914 10:02:33.280960    2829 out.go:345] Setting OutFile to fd 1 ...
I0914 10:02:33.281144    2829 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0914 10:02:33.281147    2829 out.go:358] Setting ErrFile to fd 2...
I0914 10:02:33.281150    2829 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0914 10:02:33.281309    2829 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19643-1079/.minikube/bin
I0914 10:02:33.281878    2829 config.go:182] Loaded profile config "functional-855000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0914 10:02:33.281945    2829 config.go:182] Loaded profile config "functional-855000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0914 10:02:33.282916    2829 ssh_runner.go:195] Run: systemctl --version
I0914 10:02:33.282925    2829 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/functional-855000/id_rsa Username:docker}
I0914 10:02:33.304902    2829 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.07s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-855000 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-855000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-proxy                  | v1.31.1           | 24a140c548c07 | 94.7MB |
| docker.io/library/nginx                     | alpine            | b887aca7aed61 | 47MB   |
| docker.io/library/nginx                     | latest            | 195245f0c7927 | 193MB  |
| registry.k8s.io/coredns/coredns             | v1.11.3           | 2f6c962e7b831 | 60.2MB |
| registry.k8s.io/etcd                        | 3.5.15-0          | 27e3830e14027 | 139MB  |
| registry.k8s.io/pause                       | 3.10              | afb61768ce381 | 514kB  |
| docker.io/kicbase/echo-server               | functional-855000 | ce2d2cda2d858 | 4.78MB |
| docker.io/library/minikube-local-cache-test | functional-855000 | 8bb93ee3faf13 | 30B    |
| registry.k8s.io/kube-apiserver              | v1.31.1           | d3f53a98c0a9d | 91.6MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| registry.k8s.io/kube-controller-manager     | v1.31.1           | 279f381cb3736 | 85.9MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| localhost/my-image                          | functional-855000 | d220d18a85073 | 1.41MB |
| registry.k8s.io/kube-scheduler              | v1.31.1           | 7f8aa378bb47d | 66MB   |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-855000 image ls --format table --alsologtostderr:
I0914 10:02:35.314919    2841 out.go:345] Setting OutFile to fd 1 ...
I0914 10:02:35.315063    2841 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0914 10:02:35.315067    2841 out.go:358] Setting ErrFile to fd 2...
I0914 10:02:35.315069    2841 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0914 10:02:35.315194    2841 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19643-1079/.minikube/bin
I0914 10:02:35.315603    2841 config.go:182] Loaded profile config "functional-855000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0914 10:02:35.315668    2841 config.go:182] Loaded profile config "functional-855000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0914 10:02:35.316514    2841 ssh_runner.go:195] Run: systemctl --version
I0914 10:02:35.316524    2841 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/functional-855000/id_rsa Username:docker}
I0914 10:02:35.339404    2841 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
2024/09/14 10:02:44 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.08s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-855000 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-855000 image ls --format json --alsologtostderr:
[{"id":"8bb93ee3faf13692376d962a5c97ca655b4d444f9c00a5d4a069b84e423d60fe","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-855000"],"size":"30"},{"id":"27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"139000000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-855000"],"size":"4780000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"d220d18a85073332437605641d823f3a8f8474a31c307d4e73fb24fd001efca7","repoDigests":[],"repoTags":["localhost/my-image:functional-855000"],"size":"1410000"},{"id":"24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"94700000"},{"id":"b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"47000000"},{"id":"195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"193000000"},{"id":"2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"60200000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"85900000"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"514000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"91600000"},{"id":"7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"66000000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-855000 image ls --format json --alsologtostderr:
I0914 10:02:35.248994    2839 out.go:345] Setting OutFile to fd 1 ...
I0914 10:02:35.249142    2839 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0914 10:02:35.249146    2839 out.go:358] Setting ErrFile to fd 2...
I0914 10:02:35.249148    2839 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0914 10:02:35.249282    2839 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19643-1079/.minikube/bin
I0914 10:02:35.249743    2839 config.go:182] Loaded profile config "functional-855000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0914 10:02:35.249817    2839 config.go:182] Loaded profile config "functional-855000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0914 10:02:35.250601    2839 ssh_runner.go:195] Run: systemctl --version
I0914 10:02:35.250610    2839 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/functional-855000/id_rsa Username:docker}
I0914 10:02:35.271335    2839 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.07s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.06s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-855000 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-855000 image ls --format yaml --alsologtostderr:
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "66000000"
- id: 24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "94700000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-855000
size: "4780000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139000000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "91600000"
- id: 279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "85900000"
- id: 195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "193000000"
- id: 2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "60200000"
- id: b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "47000000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 8bb93ee3faf13692376d962a5c97ca655b4d444f9c00a5d4a069b84e423d60fe
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-855000
size: "30"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "514000"

functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-855000 image ls --format yaml --alsologtostderr:
I0914 10:02:33.347281    2831 out.go:345] Setting OutFile to fd 1 ...
I0914 10:02:33.347438    2831 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0914 10:02:33.347442    2831 out.go:358] Setting ErrFile to fd 2...
I0914 10:02:33.347444    2831 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0914 10:02:33.347562    2831 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19643-1079/.minikube/bin
I0914 10:02:33.348009    2831 config.go:182] Loaded profile config "functional-855000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0914 10:02:33.348073    2831 config.go:182] Loaded profile config "functional-855000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0914 10:02:33.348971    2831 ssh_runner.go:195] Run: systemctl --version
I0914 10:02:33.348982    2831 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/functional-855000/id_rsa Username:docker}
I0914 10:02:33.369597    2831 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.06s)
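
The four ImageList checks above run the same listing and vary only the presentation; each is backed by the identical `docker images --no-trunc --format "{{json .}}"` call visible in the stderr traces. Equivalent manual invocations, assuming the profile from this run:

  minikube -p functional-855000 image ls --format short   # one image reference per line
  minikube -p functional-855000 image ls --format table   # Image / Tag / Image ID / Size table
  minikube -p functional-855000 image ls --format json    # a single JSON array
  minikube -p functional-855000 image ls --format yaml    # id / repoDigests / repoTags / size entries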

TestFunctional/parallel/ImageCommands/ImageBuild (1.83s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p functional-855000 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-855000 ssh pgrep buildkitd: exit status 1 (54.202208ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-darwin-arm64 -p functional-855000 image build -t localhost/my-image:functional-855000 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-darwin-arm64 -p functional-855000 image build -t localhost/my-image:functional-855000 testdata/build --alsologtostderr: (1.712823167s)
functional_test.go:323: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-855000 image build -t localhost/my-image:functional-855000 testdata/build --alsologtostderr:
I0914 10:02:33.466783    2835 out.go:345] Setting OutFile to fd 1 ...
I0914 10:02:33.466984    2835 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0914 10:02:33.466988    2835 out.go:358] Setting ErrFile to fd 2...
I0914 10:02:33.466990    2835 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0914 10:02:33.467107    2835 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19643-1079/.minikube/bin
I0914 10:02:33.467590    2835 config.go:182] Loaded profile config "functional-855000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0914 10:02:33.468361    2835 config.go:182] Loaded profile config "functional-855000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0914 10:02:33.469238    2835 ssh_runner.go:195] Run: systemctl --version
I0914 10:02:33.469249    2835 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19643-1079/.minikube/machines/functional-855000/id_rsa Username:docker}
I0914 10:02:33.492126    2835 build_images.go:161] Building image from path: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.405097151.tar
I0914 10:02:33.492179    2835 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0914 10:02:33.495874    2835 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.405097151.tar
I0914 10:02:33.497797    2835 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.405097151.tar: stat -c "%s %y" /var/lib/minikube/build/build.405097151.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.405097151.tar': No such file or directory
I0914 10:02:33.497817    2835 ssh_runner.go:362] scp /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.405097151.tar --> /var/lib/minikube/build/build.405097151.tar (3072 bytes)
I0914 10:02:33.507034    2835 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.405097151
I0914 10:02:33.510378    2835 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.405097151 -xf /var/lib/minikube/build/build.405097151.tar
I0914 10:02:33.513720    2835 docker.go:360] Building image: /var/lib/minikube/build/build.405097151
I0914 10:02:33.513796    2835 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-855000 /var/lib/minikube/build/build.405097151
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.9s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B 0.0s done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.5s

#6 [2/3] RUN true
#6 DONE 0.1s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:d220d18a85073332437605641d823f3a8f8474a31c307d4e73fb24fd001efca7 done
#8 naming to localhost/my-image:functional-855000 done
#8 DONE 0.0s
I0914 10:02:35.136474    2835 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-855000 /var/lib/minikube/build/build.405097151: (1.622718458s)
I0914 10:02:35.136559    2835 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.405097151
I0914 10:02:35.140886    2835 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.405097151.tar
I0914 10:02:35.144225    2835 build_images.go:217] Built localhost/my-image:functional-855000 from /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.405097151.tar
I0914 10:02:35.144239    2835 build_images.go:133] succeeded building to: functional-855000
I0914 10:02:35.144242    2835 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-855000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (1.83s)
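
ImageBuild ships a tar of testdata/build to the guest and builds it with the cluster's own Docker daemon; the BuildKit trace shows the three-step Dockerfile (a busybox base, RUN true, ADD content.txt /). A sketch of the same flow by hand, assuming a Docker-runtime profile named functional-855000:

  minikube -p functional-855000 image build -t localhost/my-image:functional-855000 testdata/build
  minikube -p functional-855000 image ls | grep my-image   # the built image is now visible in the guest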

TestFunctional/parallel/ImageCommands/Setup (1.78s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.763537s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-855000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.78s)

TestFunctional/parallel/DockerEnv/bash (0.3s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-855000 docker-env) && out/minikube-darwin-arm64 status -p functional-855000"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-855000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.30s)
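
docker-env emits shell exports (DOCKER_HOST and related variables) that point a local docker client at the daemon inside the VM; the test evals them and confirms that both minikube status and docker images still work. The same pattern is usable interactively:

  eval $(minikube -p functional-855000 docker-env) && docker images   # lists the guest's images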

TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-855000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.06s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-855000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.06s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-855000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

TestFunctional/parallel/ServiceCmd/DeployApp (11.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-855000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-855000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-ppc4j" [9727cc52-2450-4742-93a6-757c76565324] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-ppc4j" [9727cc52-2450-4742-93a6-757c76565324] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
E0914 10:01:57.254091    1603 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/addons-528000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.008761584s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.09s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-darwin-arm64 -p functional-855000 image load --daemon kicbase/echo-server:functional-855000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-855000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.47s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.36s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-darwin-arm64 -p functional-855000 image load --daemon kicbase/echo-server:functional-855000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-855000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.36s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.18s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-855000
functional_test.go:245: (dbg) Run:  out/minikube-darwin-arm64 -p functional-855000 image load --daemon kicbase/echo-server:functional-855000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-855000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.18s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-darwin-arm64 -p functional-855000 image save kicbase/echo-server:functional-855000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.16s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 -p functional-855000 image rm kicbase/echo-server:functional-855000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-855000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.15s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-darwin-arm64 -p functional-855000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-855000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.26s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.18s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-855000
functional_test.go:424: (dbg) Run:  out/minikube-darwin-arm64 -p functional-855000 image save --daemon kicbase/echo-server:functional-855000 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-855000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.18s)
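
Taken together, the image load/save tests above round-trip an image through every transport minikube supports: load from the host daemon, save to a tar, remove from the guest, reload from the tar, and save back into the host daemon. Condensed, with /tmp substituted for the CI workspace path used in this run:

  minikube -p functional-855000 image load --daemon kicbase/echo-server:functional-855000
  minikube -p functional-855000 image save kicbase/echo-server:functional-855000 /tmp/echo-server-save.tar
  minikube -p functional-855000 image rm kicbase/echo-server:functional-855000
  minikube -p functional-855000 image load /tmp/echo-server-save.tar
  minikube -p functional-855000 image save --daemon kicbase/echo-server:functional-855000
  docker image inspect kicbase/echo-server:functional-855000   # the image is back in the host daemon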

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (1.65s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-855000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-855000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-855000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-855000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2654: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (1.65s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-855000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.1s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-855000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [c7c03866-4f42-4229-8820-e3c7ff530faa] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [c7c03866-4f42-4229-8820-e3c7ff530faa] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.00796625s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.10s)

TestFunctional/parallel/ServiceCmd/List (0.11s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-darwin-arm64 -p functional-855000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.11s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.08s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-darwin-arm64 -p functional-855000 service list -o json
functional_test.go:1494: Took "79.966541ms" to run "out/minikube-darwin-arm64 -p functional-855000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.08s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-darwin-arm64 -p functional-855000 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.105.4:32483
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.09s)

TestFunctional/parallel/ServiceCmd/Format (0.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-darwin-arm64 -p functional-855000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.09s)

TestFunctional/parallel/ServiceCmd/URL (0.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-darwin-arm64 -p functional-855000 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.105.4:32483
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.09s)
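
The ServiceCmd checks resolve the NodePort service created by DeployApp in four ways: a plain list, JSON output, an https URL, and a plain URL. The two URL forms, exactly as run above:

  minikube -p functional-855000 service --namespace=default --https --url hello-node   # https://192.168.105.4:32483
  minikube -p functional-855000 service hello-node --url                               # http://192.168.105.4:32483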

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-855000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.103.228.85 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-855000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)
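
The tunnel suite keeps minikube tunnel running as a background daemon, deploys a LoadBalancer-backed nginx service, and verifies reachability from the macOS host both by ingress IP and by cluster DNS. The verification steps, as exercised above (tunnel may prompt for elevated privileges to create routes):

  minikube -p functional-855000 tunnel &
  kubectl --context functional-855000 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.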

TestFunctional/parallel/ProfileCmd/profile_not_create (0.13s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.13s)

TestFunctional/parallel/ProfileCmd/profile_list (0.12s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1315: Took "82.966084ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1329: Took "33.665958ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.12s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.12s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1366: Took "80.56825ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1379: Took "36.5095ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.12s)

TestFunctional/parallel/MountCmd/any-port (5.15s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-855000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2937196915/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1726333344768753000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2937196915/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1726333344768753000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2937196915/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1726333344768753000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2937196915/001/test-1726333344768753000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-855000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-855000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (58.110625ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-855000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-855000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 14 17:02 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 14 17:02 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 14 17:02 test-1726333344768753000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-855000 ssh cat /mount-9p/test-1726333344768753000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-855000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [37c45988-666f-4e1a-81ef-e9c9f9ca9988] Pending
helpers_test.go:344: "busybox-mount" [37c45988-666f-4e1a-81ef-e9c9f9ca9988] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [37c45988-666f-4e1a-81ef-e9c9f9ca9988] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [37c45988-666f-4e1a-81ef-e9c9f9ca9988] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.003908292s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-855000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-855000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-855000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-855000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-855000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2937196915/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (5.15s)

TestFunctional/parallel/MountCmd/specific-port (0.96s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-855000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port3386248699/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-855000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-855000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (58.933458ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-855000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-arm64 -p functional-855000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-855000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port3386248699/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-855000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-855000 ssh "sudo umount -f /mount-9p": exit status 1 (59.098292ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-855000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-855000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port3386248699/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (0.96s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.05s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-855000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1516516861/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-855000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1516516861/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-855000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1516516861/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-855000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-855000 ssh "findmnt -T" /mount1: exit status 1 (65.504959ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-855000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-855000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-855000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-arm64 mount -p functional-855000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-855000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1516516861/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-855000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1516516861/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-855000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1516516861/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.05s)
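
The MountCmd tests 9p-mount a host temp directory into the guest, check visibility in both directions (including writes from a busybox pod), and confirm that the kill switch tears every mount down. The core loop, with HOSTDIR standing in for the per-test temp directories used here:

  minikube mount -p functional-855000 "$HOSTDIR":/mount-9p &
  minikube -p functional-855000 ssh "findmnt -T /mount-9p | grep 9p"   # the mount is live
  minikube -p functional-855000 ssh -- ls -la /mount-9p
  minikube mount -p functional-855000 --kill=true   # kills all mount processes for the profile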

TestFunctional/delete_echo-server_images (0.06s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-855000
--- PASS: TestFunctional/delete_echo-server_images (0.06s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-855000
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-855000
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (178.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-258000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
E0914 10:02:58.699423    1603 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/addons-528000/client.crt: no such file or directory" logger="UnhandledError"
E0914 10:04:20.619856    1603 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/addons-528000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-darwin-arm64 start -p ha-258000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : (2m57.888345458s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 -p ha-258000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (178.08s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (4.42s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-258000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-258000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-darwin-arm64 kubectl -p ha-258000 -- rollout status deployment/busybox: (2.927129625s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-258000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-258000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-258000 -- exec busybox-7dff88458-bk6dc -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-258000 -- exec busybox-7dff88458-wqv52 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-258000 -- exec busybox-7dff88458-zjp2j -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-258000 -- exec busybox-7dff88458-bk6dc -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-258000 -- exec busybox-7dff88458-wqv52 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-258000 -- exec busybox-7dff88458-zjp2j -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-258000 -- exec busybox-7dff88458-bk6dc -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-258000 -- exec busybox-7dff88458-wqv52 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-258000 -- exec busybox-7dff88458-zjp2j -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.42s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (0.75s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-258000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-258000 -- exec busybox-7dff88458-bk6dc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-258000 -- exec busybox-7dff88458-bk6dc -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-258000 -- exec busybox-7dff88458-wqv52 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-258000 -- exec busybox-7dff88458-wqv52 -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-258000 -- exec busybox-7dff88458-zjp2j -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-258000 -- exec busybox-7dff88458-zjp2j -- sh -c "ping -c 1 192.168.105.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (0.75s)
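
Note: the awk 'NR==5' | cut -d' ' -f3 pipeline above takes the third space-separated field of the fifth line of nslookup output, which is where BusyBox's nslookup prints the resolved address; the test then pings that address. A minimal sketch of the same lookup done natively in Go, assuming it runs somewhere the cluster's DNS can resolve the name (the test runs it inside a busybox pod):

// lookup_host.go - a minimal sketch of what the awk/cut pipeline above
// extracts: the IP that host.minikube.internal resolves to. This only
// resolves where the cluster's DNS is in use (e.g. inside a pod); on
// the host machine the name won't exist.
package main

import (
	"fmt"
	"net"
)

func main() {
	addrs, err := net.LookupHost("host.minikube.internal")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	// The test then pings the first address (192.168.105.1 in this run).
	fmt.Println("host IP:", addrs[0])
}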

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (52.78s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-258000 -v=7 --alsologtostderr
E0914 10:06:36.727712    1603 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/addons-528000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-darwin-arm64 node add -p ha-258000 -v=7 --alsologtostderr: (52.563164959s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-arm64 -p ha-258000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (52.78s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.13s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-258000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.13s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.24s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.24s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (4.3s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-258000 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-258000 cp testdata/cp-test.txt ha-258000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-258000 ssh -n ha-258000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-258000 cp ha-258000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile309057738/001/cp-test_ha-258000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-258000 ssh -n ha-258000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-258000 cp ha-258000:/home/docker/cp-test.txt ha-258000-m02:/home/docker/cp-test_ha-258000_ha-258000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-258000 ssh -n ha-258000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-258000 ssh -n ha-258000-m02 "sudo cat /home/docker/cp-test_ha-258000_ha-258000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-258000 cp ha-258000:/home/docker/cp-test.txt ha-258000-m03:/home/docker/cp-test_ha-258000_ha-258000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-258000 ssh -n ha-258000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-258000 ssh -n ha-258000-m03 "sudo cat /home/docker/cp-test_ha-258000_ha-258000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-258000 cp ha-258000:/home/docker/cp-test.txt ha-258000-m04:/home/docker/cp-test_ha-258000_ha-258000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-258000 ssh -n ha-258000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-258000 ssh -n ha-258000-m04 "sudo cat /home/docker/cp-test_ha-258000_ha-258000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-258000 cp testdata/cp-test.txt ha-258000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-258000 ssh -n ha-258000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-258000 cp ha-258000-m02:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile309057738/001/cp-test_ha-258000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-258000 ssh -n ha-258000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-258000 cp ha-258000-m02:/home/docker/cp-test.txt ha-258000:/home/docker/cp-test_ha-258000-m02_ha-258000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-258000 ssh -n ha-258000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-258000 ssh -n ha-258000 "sudo cat /home/docker/cp-test_ha-258000-m02_ha-258000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-258000 cp ha-258000-m02:/home/docker/cp-test.txt ha-258000-m03:/home/docker/cp-test_ha-258000-m02_ha-258000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-258000 ssh -n ha-258000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-258000 ssh -n ha-258000-m03 "sudo cat /home/docker/cp-test_ha-258000-m02_ha-258000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-258000 cp ha-258000-m02:/home/docker/cp-test.txt ha-258000-m04:/home/docker/cp-test_ha-258000-m02_ha-258000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-258000 ssh -n ha-258000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-258000 ssh -n ha-258000-m04 "sudo cat /home/docker/cp-test_ha-258000-m02_ha-258000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-258000 cp testdata/cp-test.txt ha-258000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-258000 ssh -n ha-258000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-258000 cp ha-258000-m03:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile309057738/001/cp-test_ha-258000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-258000 ssh -n ha-258000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-258000 cp ha-258000-m03:/home/docker/cp-test.txt ha-258000:/home/docker/cp-test_ha-258000-m03_ha-258000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-258000 ssh -n ha-258000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-258000 ssh -n ha-258000 "sudo cat /home/docker/cp-test_ha-258000-m03_ha-258000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-258000 cp ha-258000-m03:/home/docker/cp-test.txt ha-258000-m02:/home/docker/cp-test_ha-258000-m03_ha-258000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-258000 ssh -n ha-258000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-258000 ssh -n ha-258000-m02 "sudo cat /home/docker/cp-test_ha-258000-m03_ha-258000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-258000 cp ha-258000-m03:/home/docker/cp-test.txt ha-258000-m04:/home/docker/cp-test_ha-258000-m03_ha-258000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-258000 ssh -n ha-258000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-258000 ssh -n ha-258000-m04 "sudo cat /home/docker/cp-test_ha-258000-m03_ha-258000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-258000 cp testdata/cp-test.txt ha-258000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-258000 ssh -n ha-258000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-258000 cp ha-258000-m04:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile309057738/001/cp-test_ha-258000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-258000 ssh -n ha-258000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-258000 cp ha-258000-m04:/home/docker/cp-test.txt ha-258000:/home/docker/cp-test_ha-258000-m04_ha-258000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-258000 ssh -n ha-258000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-258000 ssh -n ha-258000 "sudo cat /home/docker/cp-test_ha-258000-m04_ha-258000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-258000 cp ha-258000-m04:/home/docker/cp-test.txt ha-258000-m02:/home/docker/cp-test_ha-258000-m04_ha-258000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-258000 ssh -n ha-258000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-258000 ssh -n ha-258000-m02 "sudo cat /home/docker/cp-test_ha-258000-m04_ha-258000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-258000 cp ha-258000-m04:/home/docker/cp-test.txt ha-258000-m03:/home/docker/cp-test_ha-258000-m04_ha-258000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-258000 ssh -n ha-258000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-258000 ssh -n ha-258000-m03 "sudo cat /home/docker/cp-test_ha-258000-m04_ha-258000-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (4.30s)
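
Note: each trio of commands above is one round trip: minikube cp pushes a file to a node, then minikube ssh ... sudo cat reads it back for comparison. A minimal sketch of a single round trip, assuming the binary path and profile from this run; node and file paths are copied from the first trio:

// cp_roundtrip.go - a minimal sketch of one round trip from the log:
// push a file to a node with "minikube cp", read it back over ssh, and
// compare the bytes. Paths and profile name are assumptions taken from
// this run.
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const bin, profile = "out/minikube-darwin-arm64", "ha-258000"
	want, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		panic(err)
	}
	if err := exec.Command(bin, "-p", profile, "cp", "testdata/cp-test.txt", "ha-258000:/home/docker/cp-test.txt").Run(); err != nil {
		panic(err)
	}
	got, err := exec.Command(bin, "-p", profile, "ssh", "-n", "ha-258000", "sudo cat /home/docker/cp-test.txt").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("round trip ok:", bytes.Equal(want, got))
}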

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (78.13s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (1m18.129239209s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (78.13s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.05s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.05s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (1.77s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-097000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-097000 --output=json --user=testUser: (1.766460166s)
--- PASS: TestJSONOutput/stop/Command (1.77s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.2s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-811000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-811000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (94.876916ms)

-- stdout --
	{"specversion":"1.0","id":"3dd9190c-31fb-46ed-b81a-0400e817ae20","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-811000] minikube v1.34.0 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"6229ee2a-a6bd-44d6-8887-50d9c680b919","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19643"}}
	{"specversion":"1.0","id":"ec3aafcd-3be9-4cc6-a496-4145be5ab9c5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig"}}
	{"specversion":"1.0","id":"7f86dd3c-ea6b-4e60-86f3-6e21c94484e4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"0bc31213-4a0e-4a4f-b06f-ae09677015cd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"4da9d7b3-a0cd-478b-b212-f0ae75255f72","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube"}}
	{"specversion":"1.0","id":"530c286b-1591-4eac-947f-b68fa1278301","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"5abd78a2-bb97-427b-a23e-ebf29bd3440e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-811000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-811000
--- PASS: TestErrorJSONOutput (0.20s)
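
Note: with --output=json, each stdout line above is a CloudEvents-style envelope whose type field distinguishes steps, info messages, and errors, with the payload under data. A minimal sketch of a consumer, assuming only the field names visible in this log:

// events.go - a minimal sketch of consuming the --output=json stream
// shown above: one CloudEvents-style envelope per line, payload under
// "data". Only the fields used here are declared; the names are copied
// from this log.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin) // e.g. minikube start --output=json | ./events
	for sc.Scan() {
		var e event
		if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
			continue // skip non-JSON lines
		}
		switch e.Type {
		case "io.k8s.sigs.minikube.step":
			fmt.Printf("step %s/%s: %s\n", e.Data["currentstep"], e.Data["totalsteps"], e.Data["message"])
		case "io.k8s.sigs.minikube.error":
			fmt.Printf("error %s (exit %s): %s\n", e.Data["name"], e.Data["exitcode"], e.Data["message"])
		default:
			fmt.Println(e.Data["message"])
		}
	}
}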

                                                
                                    
TestMainNoArgs (0.03s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (1s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.00s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-993000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-993000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (102.616542ms)

-- stdout --
	* [NoKubernetes-993000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19643
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19643-1079/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19643-1079/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:

	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
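
Note: exit status 14 is the MK_USAGE error shown in stderr: --kubernetes-version cannot be combined with --no-kubernetes. Not minikube's actual code, just a generic sketch of this kind of mutual-exclusion check using Go's standard flag package:

// flags.go - a generic sketch (an assumption, not minikube's source) of
// the mutual-exclusion validation that produces the usage failure above.
package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	noK8s := flag.Bool("no-kubernetes", false, "start without Kubernetes")
	version := flag.String("kubernetes-version", "", "Kubernetes version to run")
	flag.Parse()

	if *noK8s && *version != "" {
		fmt.Fprintln(os.Stderr, "cannot specify --kubernetes-version with --no-kubernetes")
		os.Exit(14) // the usage-error exit code seen in this run
	}
	fmt.Println("flags ok")
}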

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-993000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-993000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (43.615166ms)

-- stdout --
	* The control-plane node NoKubernetes-993000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-993000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (31.43s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.626632084s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
E0914 10:39:50.917018    1603 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19643-1079/.minikube/profiles/functional-855000/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.799957792s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.43s)

                                                
                                    
TestNoKubernetes/serial/Stop (2.08s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-993000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-993000: (2.079030209s)
--- PASS: TestNoKubernetes/serial/Stop (2.08s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-993000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-993000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (41.522125ms)

-- stdout --
	* The control-plane node NoKubernetes-993000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-993000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.68s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-130000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.68s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (3.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-661000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-661000 --alsologtostderr -v=3: (3.248000916s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.25s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-661000 -n old-k8s-version-661000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-661000 -n old-k8s-version-661000: exit status 7 (30.069375ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-661000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.10s)
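
Note: the --format={{.Host}} argument above is a Go text/template rendered against minikube's status structure, which is why the command prints just "Stopped". A minimal sketch of that rendering with a stand-in struct (not minikube's actual type):

// status_format.go - a minimal sketch of how a --format={{.Host}} style
// flag is rendered: a Go text/template executed against a status
// struct. The Status type here is an illustrative stand-in.
package main

import (
	"os"
	"text/template"
)

type Status struct {
	Host, Kubelet, APIServer string
}

func main() {
	st := Status{Host: "Stopped", Kubelet: "Stopped", APIServer: "Stopped"}
	tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
	if err := tmpl.Execute(os.Stdout, st); err != nil {
		panic(err)
	}
}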

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (4s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-835000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-835000 --alsologtostderr -v=3: (4.001472208s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (4.00s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-835000 -n no-preload-835000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-835000 -n no-preload-835000: exit status 7 (53.246417ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-835000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (3.2s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-486000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-486000 --alsologtostderr -v=3: (3.201197834s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (3.20s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-486000 -n embed-certs-486000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-486000 -n embed-certs-486000: exit status 7 (59.161042ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-486000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (1.98s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-231000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-231000 --alsologtostderr -v=3: (1.9752165s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (1.98s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-231000 -n default-k8s-diff-port-231000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-231000 -n default-k8s-diff-port-231000: exit status 7 (62.198083ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-231000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.13s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-566000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (3.45s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-566000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-566000 --alsologtostderr -v=3: (3.4470385s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.45s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.14s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-566000 -n newest-cni-566000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-566000 -n newest-cni-566000: exit status 7 (68.789416ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-566000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.14s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)


Test skip (21/274)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.31s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-029000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-029000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-029000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-029000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-029000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-029000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-029000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-029000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-029000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-029000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-029000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-029000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-029000"

>>> host: /etc/hosts:
* Profile "cilium-029000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-029000"

>>> host: /etc/resolv.conf:
* Profile "cilium-029000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-029000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-029000

>>> host: crictl pods:
* Profile "cilium-029000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-029000"

>>> host: crictl containers:
* Profile "cilium-029000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-029000"

>>> k8s: describe netcat deployment:
error: context "cilium-029000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-029000" does not exist

>>> k8s: netcat logs:
error: context "cilium-029000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-029000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-029000" does not exist

>>> k8s: coredns logs:
error: context "cilium-029000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-029000" does not exist

>>> k8s: api server logs:
error: context "cilium-029000" does not exist

>>> host: /etc/cni:
* Profile "cilium-029000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-029000"

>>> host: ip a s:
* Profile "cilium-029000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-029000"

>>> host: ip r s:
* Profile "cilium-029000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-029000"

>>> host: iptables-save:
* Profile "cilium-029000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-029000"

>>> host: iptables table nat:
* Profile "cilium-029000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-029000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-029000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-029000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-029000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-029000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-029000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-029000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-029000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-029000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-029000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-029000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-029000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-029000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-029000"

>>> host: kubelet daemon config:
* Profile "cilium-029000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-029000"

>>> k8s: kubelet logs:
* Profile "cilium-029000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-029000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-029000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-029000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-029000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-029000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-029000

>>> host: docker daemon status:
* Profile "cilium-029000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-029000"

>>> host: docker daemon config:
* Profile "cilium-029000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-029000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-029000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-029000"

>>> host: docker system info:
* Profile "cilium-029000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-029000"

>>> host: cri-docker daemon status:
* Profile "cilium-029000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-029000"

>>> host: cri-docker daemon config:
* Profile "cilium-029000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-029000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-029000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-029000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-029000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-029000"

>>> host: cri-dockerd version:
* Profile "cilium-029000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-029000"

>>> host: containerd daemon status:
* Profile "cilium-029000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-029000"

>>> host: containerd daemon config:
* Profile "cilium-029000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-029000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-029000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-029000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-029000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-029000"

>>> host: containerd config dump:
* Profile "cilium-029000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-029000"

>>> host: crio daemon status:
* Profile "cilium-029000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-029000"

>>> host: crio daemon config:
* Profile "cilium-029000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-029000"

>>> host: /etc/crio:
* Profile "cilium-029000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-029000"

>>> host: crio config:
* Profile "cilium-029000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-029000"

----------------------- debugLogs end: cilium-029000 [took: 2.20956925s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-029000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-029000
--- SKIP: TestNetworkPlugins/group/cilium (2.31s)
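
Note on the debug log above: every probe fails with either "context was not found" or "Profile ... not found" because the test skips before a cilium-029000 cluster is ever created. The empty kubectl config dump (clusters, contexts, and users all null) accounts for each kubectl error, and the missing minikube profile accounts for each host probe error; none of these indicate a real regression. A minimal sketch of how to confirm this locally, assuming minikube and kubectl are on PATH (kubectl config get-contexts is illustrative and not part of the test run):

$ out/minikube-darwin-arm64 profile list    # cilium-029000 absent after the cleanup step above
$ kubectl config get-contexts               # empty table, matching the kubectl config dump above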

TestStartStop/group/disable-driver-mounts (0.11s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-248000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-248000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.11s)
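
This skip is driver gating rather than a failure: start_stop_delete_test.go:103 restricts the test to the virtualbox driver, and this job runs the QEMU driver on macOS/arm64. A hedged sketch of how the gated path would be exercised on a host that has VirtualBox installed (the profile name is taken from the cleanup line above; the exact flags the test passes are not shown in this log):

$ out/minikube-darwin-arm64 start -p disable-driver-mounts-248000 --driver=virtualbox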