Test Report: QEMU_macOS 19651

                    
f000a69778791892f7d89fef6358d7150d12a198:2024-09-16:36236

Failed tests (99/274)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 13.42
7 TestDownloadOnly/v1.20.0/kubectl 0
21 TestBinaryMirror 0.25
22 TestOffline 10.04
33 TestAddons/parallel/Registry 71.32
46 TestCertOptions 10.16
47 TestCertExpiration 195.44
48 TestDockerFlags 10.14
49 TestForceSystemdFlag 10.05
50 TestForceSystemdEnv 12.42
95 TestFunctional/parallel/ServiceCmdConnect 29.81
167 TestMultiControlPlane/serial/StopSecondaryNode 214.17
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 102.91
169 TestMultiControlPlane/serial/RestartSecondaryNode 208.88
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 234.4
172 TestMultiControlPlane/serial/DeleteSecondaryNode 0.1
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 1.05
174 TestMultiControlPlane/serial/StopCluster 202.09
175 TestMultiControlPlane/serial/RestartCluster 5.25
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.08
177 TestMultiControlPlane/serial/AddSecondaryNode 0.07
181 TestImageBuild/serial/Setup 10.38
184 TestJSONOutput/start/Command 9.9
190 TestJSONOutput/pause/Command 0.09
196 TestJSONOutput/unpause/Command 0.05
213 TestMinikubeProfile 10.09
216 TestMountStart/serial/StartWithMountFirst 10.04
219 TestMultiNode/serial/FreshStart2Nodes 9.96
220 TestMultiNode/serial/DeployApp2Nodes 114.2
221 TestMultiNode/serial/PingHostFrom2Pods 0.09
222 TestMultiNode/serial/AddNode 0.07
223 TestMultiNode/serial/MultiNodeLabels 0.06
224 TestMultiNode/serial/ProfileList 0.08
225 TestMultiNode/serial/CopyFile 0.06
226 TestMultiNode/serial/StopNode 0.14
227 TestMultiNode/serial/StartAfterStop 44.86
228 TestMultiNode/serial/RestartKeepsNodes 9.25
229 TestMultiNode/serial/DeleteNode 0.1
230 TestMultiNode/serial/StopMultiNode 3.08
231 TestMultiNode/serial/RestartMultiNode 5.25
232 TestMultiNode/serial/ValidateNameConflict 19.97
236 TestPreload 9.98
238 TestScheduledStopUnix 10.14
239 TestSkaffold 12.79
242 TestRunningBinaryUpgrade 594.65
244 TestKubernetesUpgrade 17.22
257 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 2.22
258 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.84
260 TestStoppedBinaryUpgrade/Upgrade 585.97
262 TestPause/serial/Start 10.17
272 TestNoKubernetes/serial/StartWithK8s 9.98
273 TestNoKubernetes/serial/StartWithStopK8s 5.29
274 TestNoKubernetes/serial/Start 5.3
278 TestNoKubernetes/serial/StartNoArgs 5.32
280 TestNetworkPlugins/group/auto/Start 9.86
281 TestNetworkPlugins/group/kindnet/Start 9.91
282 TestNetworkPlugins/group/calico/Start 9.99
283 TestNetworkPlugins/group/custom-flannel/Start 9.74
284 TestNetworkPlugins/group/false/Start 9.84
285 TestNetworkPlugins/group/enable-default-cni/Start 9.87
286 TestNetworkPlugins/group/flannel/Start 9.91
287 TestNetworkPlugins/group/bridge/Start 9.73
288 TestNetworkPlugins/group/kubenet/Start 9.89
290 TestStartStop/group/old-k8s-version/serial/FirstStart 9.87
292 TestStartStop/group/old-k8s-version/serial/DeployApp 0.1
293 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.13
296 TestStartStop/group/old-k8s-version/serial/SecondStart 5.23
297 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
298 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
299 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
300 TestStartStop/group/old-k8s-version/serial/Pause 0.1
302 TestStartStop/group/no-preload/serial/FirstStart 9.94
303 TestStartStop/group/no-preload/serial/DeployApp 0.09
304 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.12
307 TestStartStop/group/no-preload/serial/SecondStart 5.26
308 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
309 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
310 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
311 TestStartStop/group/no-preload/serial/Pause 0.1
313 TestStartStop/group/embed-certs/serial/FirstStart 10.01
315 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 9.87
316 TestStartStop/group/embed-certs/serial/DeployApp 0.09
317 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.11
320 TestStartStop/group/embed-certs/serial/SecondStart 5.86
321 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
322 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.11
325 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.26
326 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
327 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
328 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
329 TestStartStop/group/embed-certs/serial/Pause 0.1
331 TestStartStop/group/newest-cni/serial/FirstStart 10.04
332 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
333 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
334 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
335 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
340 TestStartStop/group/newest-cni/serial/SecondStart 5.26
343 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
344 TestStartStop/group/newest-cni/serial/Pause 0.1
TestDownloadOnly/v1.20.0/json-events (13.42s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-091000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-091000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (13.416496167s)

-- stdout --
	{"specversion":"1.0","id":"42114665-2620-40e8-82c0-8b3d38f25ebf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-091000] minikube v1.34.0 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4d27f861-805a-40c7-a468-004bd77d9121","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19651"}}
	{"specversion":"1.0","id":"4a86b6a1-57f9-4d1b-b2c5-c153aba5abb5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig"}}
	{"specversion":"1.0","id":"c6771a38-16a3-4a7e-9e59-d0c3cb86b77d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"dcaf7d2f-fdc0-41b9-bd01-3be35f5005f4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"6dedaa95-bdcc-4c04-b279-1138dc22c085","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube"}}
	{"specversion":"1.0","id":"9310c350-e115-4426-a19f-91cdacd099f2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"faecf1ed-9da2-4b3e-ad76-269b2d2be1ee","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"994d6438-9e8e-41aa-acf9-95759e4ce94d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"1b6f2103-53de-4f18-86b7-d05fec38c80a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"10d6f949-d3c5-4f49-afaf-131e864524a3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-091000\" primary control-plane node in \"download-only-091000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"a7a4f891-819c-48ca-96d7-1909c0eabfcb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"b56dc4de-be23-4206-825c-20425505aa34","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19651-1133/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x106b45780 0x106b45780 0x106b45780 0x106b45780 0x106b45780 0x106b45780 0x106b45780] Decompressors:map[bz2:0x1400055dd40 gz:0x1400055dd48 tar:0x1400055dcf0 tar.bz2:0x1400055dd00 tar.gz:0x1400055dd10 tar.xz:0x1400055dd20 tar.zst:0x1400055dd30 tbz2:0x1400055dd00 tgz:0x1400055dd10 txz:0x1400055dd20 tzst:0x1400055dd30 xz:0x1400055dd60 zip:0x1400055dd70 zst:0x1400055dd68] Getters:map[file:0x140002017d0 http:0x14000670280 https:0x14000670320] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 403","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"34a400f7-15b8-4654-8218-8c8b59a4f636","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n││\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0916 03:19:44.089652    1654 out.go:345] Setting OutFile to fd 1 ...
	I0916 03:19:44.089795    1654 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 03:19:44.089799    1654 out.go:358] Setting ErrFile to fd 2...
	I0916 03:19:44.089801    1654 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 03:19:44.089928    1654 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1133/.minikube/bin
	W0916 03:19:44.090020    1654 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19651-1133/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19651-1133/.minikube/config/config.json: no such file or directory
	I0916 03:19:44.091208    1654 out.go:352] Setting JSON to true
	I0916 03:19:44.108677    1654 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1147,"bootTime":1726480837,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0916 03:19:44.108741    1654 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 03:19:44.115185    1654 out.go:97] [download-only-091000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0916 03:19:44.115324    1654 notify.go:220] Checking for updates...
	W0916 03:19:44.115401    1654 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/preloaded-tarball: no such file or directory
	I0916 03:19:44.119191    1654 out.go:169] MINIKUBE_LOCATION=19651
	I0916 03:19:44.122162    1654 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	I0916 03:19:44.126205    1654 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0916 03:19:44.129152    1654 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 03:19:44.132181    1654 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	W0916 03:19:44.138157    1654 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0916 03:19:44.138312    1654 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 03:19:44.143191    1654 out.go:97] Using the qemu2 driver based on user configuration
	I0916 03:19:44.143215    1654 start.go:297] selected driver: qemu2
	I0916 03:19:44.143232    1654 start.go:901] validating driver "qemu2" against <nil>
	I0916 03:19:44.143309    1654 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 03:19:44.146175    1654 out.go:169] Automatically selected the socket_vmnet network
	I0916 03:19:44.151865    1654 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0916 03:19:44.151967    1654 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0916 03:19:44.152015    1654 cni.go:84] Creating CNI manager for ""
	I0916 03:19:44.152055    1654 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0916 03:19:44.152110    1654 start.go:340] cluster config:
	{Name:download-only-091000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-091000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 03:19:44.157365    1654 iso.go:125] acquiring lock: {Name:mk55e9fb1297ea51932361bbd0234f8c1091a697 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 03:19:44.162131    1654 out.go:97] Downloading VM boot image ...
	I0916 03:19:44.162150    1654 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/iso/arm64/minikube-v1.34.0-1726415472-19646-arm64.iso
	I0916 03:19:49.782507    1654 out.go:97] Starting "download-only-091000" primary control-plane node in "download-only-091000" cluster
	I0916 03:19:49.782534    1654 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0916 03:19:49.833107    1654 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0916 03:19:49.833132    1654 cache.go:56] Caching tarball of preloaded images
	I0916 03:19:49.833274    1654 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0916 03:19:49.837315    1654 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0916 03:19:49.837321    1654 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0916 03:19:49.908722    1654 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0916 03:19:55.562965    1654 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0916 03:19:55.563155    1654 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0916 03:19:56.259093    1654 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0916 03:19:56.259310    1654 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/download-only-091000/config.json ...
	I0916 03:19:56.259327    1654 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/download-only-091000/config.json: {Name:mka9ea026540357746e2a2b0fa7705edce6bdf58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 03:19:56.259554    1654 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0916 03:19:56.259743    1654 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0916 03:19:57.426573    1654 out.go:193] 
	W0916 03:19:57.436655    1654 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19651-1133/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x106b45780 0x106b45780 0x106b45780 0x106b45780 0x106b45780 0x106b45780 0x106b45780] Decompressors:map[bz2:0x1400055dd40 gz:0x1400055dd48 tar:0x1400055dcf0 tar.bz2:0x1400055dd00 tar.gz:0x1400055dd10 tar.xz:0x1400055dd20 tar.zst:0x1400055dd30 tbz2:0x1400055dd00 tgz:0x1400055dd10 txz:0x1400055dd20 tzst:0x1400055dd30 xz:0x1400055dd60 zip:0x1400055dd70 zst:0x1400055dd68] Getters:map[file:0x140002017d0 http:0x14000670280 https:0x14000670320] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 403
	W0916 03:19:57.436682    1654 out_reason.go:110] 
	W0916 03:19:57.443602    1654 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 03:19:57.447593    1654 out.go:193] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-091000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (13.42s)
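
Note: the root failure above is an HTTP 403 while fetching the kubectl checksum for darwin/arm64 v1.20.0 from dl.k8s.io. A minimal diagnostic sketch, assuming only curl is available (URLs copied verbatim from the log), to confirm the artifact is missing upstream rather than a local cache problem:

	# HEAD-request the exact URLs the test tried; -L follows dl.k8s.io's redirect.
	curl -sIL -o /dev/null -w '%{http_code}\n' \
	  https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl
	curl -sIL -o /dev/null -w '%{http_code}\n' \
	  https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256
	# Anything other than 200 (here: 403) suggests no darwin/arm64 kubectl is
	# published for v1.20.0, so minikube's cache step cannot succeed.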

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19651-1133/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestBinaryMirror (0.25s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-160000 --alsologtostderr --binary-mirror http://127.0.0.1:49312 --driver=qemu2 
aaa_download_only_test.go:314: (dbg) Non-zero exit: out/minikube-darwin-arm64 start --download-only -p binary-mirror-160000 --alsologtostderr --binary-mirror http://127.0.0.1:49312 --driver=qemu2 : exit status 40 (151.027375ms)

-- stdout --
	* [binary-mirror-160000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19651
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "binary-mirror-160000" primary control-plane node in "binary-mirror-160000" cluster
	
	

-- /stdout --
** stderr ** 
	I0916 03:20:05.512110    1718 out.go:345] Setting OutFile to fd 1 ...
	I0916 03:20:05.512248    1718 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 03:20:05.512251    1718 out.go:358] Setting ErrFile to fd 2...
	I0916 03:20:05.512254    1718 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 03:20:05.512380    1718 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1133/.minikube/bin
	I0916 03:20:05.513458    1718 out.go:352] Setting JSON to false
	I0916 03:20:05.529825    1718 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1168,"bootTime":1726480837,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0916 03:20:05.529896    1718 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 03:20:05.534160    1718 out.go:177] * [binary-mirror-160000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0916 03:20:05.540013    1718 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 03:20:05.540040    1718 notify.go:220] Checking for updates...
	I0916 03:20:05.546136    1718 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	I0916 03:20:05.549092    1718 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0916 03:20:05.552107    1718 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 03:20:05.555136    1718 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	I0916 03:20:05.556740    1718 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 03:20:05.561049    1718 out.go:177] * Using the qemu2 driver based on user configuration
	I0916 03:20:05.567937    1718 start.go:297] selected driver: qemu2
	I0916 03:20:05.567945    1718 start.go:901] validating driver "qemu2" against <nil>
	I0916 03:20:05.568014    1718 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 03:20:05.571122    1718 out.go:177] * Automatically selected the socket_vmnet network
	I0916 03:20:05.576350    1718 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0916 03:20:05.576447    1718 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0916 03:20:05.576466    1718 cni.go:84] Creating CNI manager for ""
	I0916 03:20:05.576497    1718 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 03:20:05.576504    1718 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0916 03:20:05.576552    1718 start.go:340] cluster config:
	{Name:binary-mirror-160000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:binary-mirror-160000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:http://127.0.0.1:49312 DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 03:20:05.579913    1718 iso.go:125] acquiring lock: {Name:mk55e9fb1297ea51932361bbd0234f8c1091a697 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 03:20:05.588128    1718 out.go:177] * Starting "binary-mirror-160000" primary control-plane node in "binary-mirror-160000" cluster
	I0916 03:20:05.592078    1718 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 03:20:05.592093    1718 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0916 03:20:05.592106    1718 cache.go:56] Caching tarball of preloaded images
	I0916 03:20:05.592174    1718 preload.go:172] Found /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0916 03:20:05.592180    1718 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0916 03:20:05.592369    1718 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/binary-mirror-160000/config.json ...
	I0916 03:20:05.592379    1718 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/binary-mirror-160000/config.json: {Name:mk2509ccae1f7158244c85aaa1793aa123947e2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 03:20:05.592732    1718 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 03:20:05.592783    1718 download.go:107] Downloading: http://127.0.0.1:49312/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:49312/v1.31.1/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/darwin/arm64/v1.31.1/kubectl
	I0916 03:20:05.611125    1718 out.go:201] 
	W0916 03:20:05.615074    1718 out.go:270] X Exiting due to INET_CACHE_KUBECTL: Failed to cache kubectl: download failed: http://127.0.0.1:49312/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:49312/v1.31.1/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:http://127.0.0.1:49312/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:49312/v1.31.1/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19651-1133/.minikube/cache/darwin/arm64/v1.31.1/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1074bd780 0x1074bd780 0x1074bd780 0x1074bd780 0x1074bd780 0x1074bd780 0x1074bd780] Decompressors:map[bz2:0x1400000fe70 gz:0x1400000fe78 tar:0x1400000fe10 tar.bz2:0x1400000fe20 tar.gz:0x1400000fe30 tar.xz:0x1400000fe40 tar.zst:0x1400000fe60 tbz2:0x1400000fe20 tgz:0x1400000fe30 txz:0x1400000fe40 tzst:0x1400000fe60 xz:0x1400000feb0 zip:0x1400000fec0 zst:0x1400000feb8] Getters:map[file:0x140005d5a10 http:0x14000601e00 https:0x14000601e50] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: unexpected EOF
	X Exiting due to INET_CACHE_KUBECTL: Failed to cache kubectl: download failed: http://127.0.0.1:49312/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:49312/v1.31.1/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:http://127.0.0.1:49312/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:49312/v1.31.1/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19651-1133/.minikube/cache/darwin/arm64/v1.31.1/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1074bd780 0x1074bd780 0x1074bd780 0x1074bd780 0x1074bd780 0x1074bd780 0x1074bd780] Decompressors:map[bz2:0x1400000fe70 gz:0x1400000fe78 tar:0x1400000fe10 tar.bz2:0x1400000fe20 tar.gz:0x1400000fe30 tar.xz:0x1400000fe40 tar.zst:0x1400000fe60 tbz2:0x1400000fe20 tgz:0x1400000fe30 txz:0x1400000fe40 tzst:0x1400000fe60 xz:0x1400000feb0 zip:0x1400000fec0 zst:0x1400000feb8] Getters:map[file:0x140005d5a10 http:0x14000601e00 https:0x14000601e50] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: unexpected EOF
	W0916 03:20:05.615084    1718 out.go:270] * 
	* 
	W0916 03:20:05.615552    1718 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 03:20:05.626112    1718 out.go:201] 

** /stderr **
aaa_download_only_test.go:315: start with --binary-mirror failed ["start" "--download-only" "-p" "binary-mirror-160000" "--alsologtostderr" "--binary-mirror" "http://127.0.0.1:49312" "--driver=qemu2" ""] : exit status 40
helpers_test.go:175: Cleaning up "binary-mirror-160000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-160000
--- FAIL: TestBinaryMirror (0.25s)
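
Note: per the download.go line above, minikube resolves --binary-mirror downloads as <mirror>/<version>/bin/<os>/<arch>/kubectl plus a sibling ".sha256" checksum file. A hedged sketch for probing a mirror by hand before re-running the test; 127.0.0.1:49312 is the throwaway port from this run, so substitute your own mirror address:

	MIRROR=http://127.0.0.1:49312
	# Both files must download completely; a truncated body surfaces as the
	# "unexpected EOF" seen in the failure above.
	curl -fsSL "$MIRROR/v1.31.1/bin/darwin/arm64/kubectl" -o /tmp/kubectl
	curl -fsSL "$MIRROR/v1.31.1/bin/darwin/arm64/kubectl.sha256"
	shasum -a 256 /tmp/kubectl   # compare against the .sha256 contents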

TestOffline (10.04s)
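
Note: the log below fails with `Failed to connect to "/var/run/socket_vmnet": Connection refused` while launching the QEMU VM, which also matches the ~10 s Start failures throughout the table above. A pre-flight sketch for the CI host (hedged: the service-management commands depend on how socket_vmnet was installed; the paths come from the log below):

	# The qemu2 driver launches VMs through socket_vmnet_client (see the
	# libmachine "executing:" line in the log); the daemon must be up first.
	ls -l /var/run/socket_vmnet                    # does the socket exist?
	sudo launchctl list | grep -i socket_vmnet     # is a daemon loaded?
	# If installed via Homebrew, restarting the root service may recover it:
	sudo brew services restart socket_vmnet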

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-003000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-003000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.89072475s)

-- stdout --
	* [offline-docker-003000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19651
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-003000" primary control-plane node in "offline-docker-003000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-003000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0916 04:05:09.371551    4359 out.go:345] Setting OutFile to fd 1 ...
	I0916 04:05:09.371694    4359 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:05:09.371698    4359 out.go:358] Setting ErrFile to fd 2...
	I0916 04:05:09.371700    4359 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:05:09.371833    4359 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1133/.minikube/bin
	I0916 04:05:09.373043    4359 out.go:352] Setting JSON to false
	I0916 04:05:09.390860    4359 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3872,"bootTime":1726480837,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0916 04:05:09.390937    4359 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 04:05:09.396768    4359 out.go:177] * [offline-docker-003000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0916 04:05:09.403980    4359 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 04:05:09.404018    4359 notify.go:220] Checking for updates...
	I0916 04:05:09.411886    4359 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	I0916 04:05:09.414976    4359 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0916 04:05:09.417910    4359 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 04:05:09.420920    4359 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	I0916 04:05:09.424051    4359 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 04:05:09.425573    4359 config.go:182] Loaded profile config "multinode-990000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 04:05:09.425623    4359 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 04:05:09.429899    4359 out.go:177] * Using the qemu2 driver based on user configuration
	I0916 04:05:09.436788    4359 start.go:297] selected driver: qemu2
	I0916 04:05:09.436799    4359 start.go:901] validating driver "qemu2" against <nil>
	I0916 04:05:09.436807    4359 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 04:05:09.438818    4359 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 04:05:09.441934    4359 out.go:177] * Automatically selected the socket_vmnet network
	I0916 04:05:09.445000    4359 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 04:05:09.445016    4359 cni.go:84] Creating CNI manager for ""
	I0916 04:05:09.445035    4359 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 04:05:09.445045    4359 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0916 04:05:09.445083    4359 start.go:340] cluster config:
	{Name:offline-docker-003000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-003000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 04:05:09.448751    4359 iso.go:125] acquiring lock: {Name:mk55e9fb1297ea51932361bbd0234f8c1091a697 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 04:05:09.455891    4359 out.go:177] * Starting "offline-docker-003000" primary control-plane node in "offline-docker-003000" cluster
	I0916 04:05:09.459975    4359 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 04:05:09.460005    4359 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0916 04:05:09.460017    4359 cache.go:56] Caching tarball of preloaded images
	I0916 04:05:09.460130    4359 preload.go:172] Found /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0916 04:05:09.460141    4359 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0916 04:05:09.460209    4359 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/offline-docker-003000/config.json ...
	I0916 04:05:09.460220    4359 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/offline-docker-003000/config.json: {Name:mk7d79df84bcfd72f4582fcfec0a2758a568292d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 04:05:09.460566    4359 start.go:360] acquireMachinesLock for offline-docker-003000: {Name:mk751074fc8f4092f9eb4cf42ed6e63cca97a7fa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 04:05:09.460602    4359 start.go:364] duration metric: took 29.458µs to acquireMachinesLock for "offline-docker-003000"
	I0916 04:05:09.460613    4359 start.go:93] Provisioning new machine with config: &{Name:offline-docker-003000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-003000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 04:05:09.460654    4359 start.go:125] createHost starting for "" (driver="qemu2")
	I0916 04:05:09.468932    4359 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0916 04:05:09.485106    4359 start.go:159] libmachine.API.Create for "offline-docker-003000" (driver="qemu2")
	I0916 04:05:09.485141    4359 client.go:168] LocalClient.Create starting
	I0916 04:05:09.485213    4359 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/ca.pem
	I0916 04:05:09.485242    4359 main.go:141] libmachine: Decoding PEM data...
	I0916 04:05:09.485252    4359 main.go:141] libmachine: Parsing certificate...
	I0916 04:05:09.485293    4359 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/cert.pem
	I0916 04:05:09.485316    4359 main.go:141] libmachine: Decoding PEM data...
	I0916 04:05:09.485323    4359 main.go:141] libmachine: Parsing certificate...
	I0916 04:05:09.485700    4359 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19651-1133/.minikube/cache/iso/arm64/minikube-v1.34.0-1726415472-19646-arm64.iso...
	I0916 04:05:09.647641    4359 main.go:141] libmachine: Creating SSH key...
	I0916 04:05:09.820099    4359 main.go:141] libmachine: Creating Disk image...
	I0916 04:05:09.820112    4359 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0916 04:05:09.824718    4359 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/offline-docker-003000/disk.qcow2.raw /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/offline-docker-003000/disk.qcow2
	I0916 04:05:09.840303    4359 main.go:141] libmachine: STDOUT: 
	I0916 04:05:09.840335    4359 main.go:141] libmachine: STDERR: 
	I0916 04:05:09.840433    4359 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/offline-docker-003000/disk.qcow2 +20000M
	I0916 04:05:09.852468    4359 main.go:141] libmachine: STDOUT: Image resized.
	
	I0916 04:05:09.852497    4359 main.go:141] libmachine: STDERR: 
	I0916 04:05:09.852527    4359 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/offline-docker-003000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/offline-docker-003000/disk.qcow2
	I0916 04:05:09.852534    4359 main.go:141] libmachine: Starting QEMU VM...
	I0916 04:05:09.852554    4359 qemu.go:418] Using hvf for hardware acceleration
	I0916 04:05:09.852593    4359 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/offline-docker-003000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/offline-docker-003000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/offline-docker-003000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:85:4d:a7:df:d0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/offline-docker-003000/disk.qcow2
	I0916 04:05:09.854734    4359 main.go:141] libmachine: STDOUT: 
	I0916 04:05:09.854755    4359 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 04:05:09.854782    4359 client.go:171] duration metric: took 369.641125ms to LocalClient.Create
	I0916 04:05:11.856894    4359 start.go:128] duration metric: took 2.3962745s to createHost
	I0916 04:05:11.856922    4359 start.go:83] releasing machines lock for "offline-docker-003000", held for 2.396362958s
	W0916 04:05:11.856948    4359 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 04:05:11.886457    4359 out.go:177] * Deleting "offline-docker-003000" in qemu2 ...
	W0916 04:05:11.908787    4359 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 04:05:11.908801    4359 start.go:729] Will try again in 5 seconds ...
	I0916 04:05:16.910779    4359 start.go:360] acquireMachinesLock for offline-docker-003000: {Name:mk751074fc8f4092f9eb4cf42ed6e63cca97a7fa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 04:05:16.910895    4359 start.go:364] duration metric: took 98.667µs to acquireMachinesLock for "offline-docker-003000"
	I0916 04:05:16.910923    4359 start.go:93] Provisioning new machine with config: &{Name:offline-docker-003000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-003000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 04:05:16.910964    4359 start.go:125] createHost starting for "" (driver="qemu2")
	I0916 04:05:16.922167    4359 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0916 04:05:16.937592    4359 start.go:159] libmachine.API.Create for "offline-docker-003000" (driver="qemu2")
	I0916 04:05:16.937622    4359 client.go:168] LocalClient.Create starting
	I0916 04:05:16.937694    4359 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/ca.pem
	I0916 04:05:16.937725    4359 main.go:141] libmachine: Decoding PEM data...
	I0916 04:05:16.937733    4359 main.go:141] libmachine: Parsing certificate...
	I0916 04:05:16.937766    4359 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/cert.pem
	I0916 04:05:16.937790    4359 main.go:141] libmachine: Decoding PEM data...
	I0916 04:05:16.937799    4359 main.go:141] libmachine: Parsing certificate...
	I0916 04:05:16.938104    4359 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19651-1133/.minikube/cache/iso/arm64/minikube-v1.34.0-1726415472-19646-arm64.iso...
	I0916 04:05:17.093981    4359 main.go:141] libmachine: Creating SSH key...
	I0916 04:05:17.171681    4359 main.go:141] libmachine: Creating Disk image...
	I0916 04:05:17.171691    4359 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0916 04:05:17.171890    4359 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/offline-docker-003000/disk.qcow2.raw /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/offline-docker-003000/disk.qcow2
	I0916 04:05:17.181108    4359 main.go:141] libmachine: STDOUT: 
	I0916 04:05:17.181124    4359 main.go:141] libmachine: STDERR: 
	I0916 04:05:17.181190    4359 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/offline-docker-003000/disk.qcow2 +20000M
	I0916 04:05:17.189168    4359 main.go:141] libmachine: STDOUT: Image resized.
	
	I0916 04:05:17.189191    4359 main.go:141] libmachine: STDERR: 
	I0916 04:05:17.189205    4359 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/offline-docker-003000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/offline-docker-003000/disk.qcow2
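The disk image above is built in two steps that can be reproduced by hand with the same qemu-img invocations the log records; the short file names in this sketch are illustrative stand-ins for the full machine paths:

	qemu-img convert -f raw -O qcow2 disk.qcow2.raw disk.qcow2   # re-encode the raw seed image as qcow2
	qemu-img resize disk.qcow2 +20000M                           # grow the virtual disk size by 20000 MB
	qemu-img info disk.qcow2                                     # confirm the new virtual size took effect
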
	I0916 04:05:17.189212    4359 main.go:141] libmachine: Starting QEMU VM...
	I0916 04:05:17.189219    4359 qemu.go:418] Using hvf for hardware acceleration
	I0916 04:05:17.189254    4359 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/offline-docker-003000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/offline-docker-003000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/offline-docker-003000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:b6:f6:fc:46:10 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/offline-docker-003000/disk.qcow2
	I0916 04:05:17.190850    4359 main.go:141] libmachine: STDOUT: 
	I0916 04:05:17.190863    4359 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 04:05:17.190877    4359 client.go:171] duration metric: took 253.256125ms to LocalClient.Create
	I0916 04:05:19.193057    4359 start.go:128] duration metric: took 2.282107s to createHost
	I0916 04:05:19.193157    4359 start.go:83] releasing machines lock for "offline-docker-003000", held for 2.282297s
	W0916 04:05:19.193524    4359 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-003000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 04:05:19.202120    4359 out.go:201] 
	W0916 04:05:19.206114    4359 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0916 04:05:19.206148    4359 out.go:270] * 
	W0916 04:05:19.208778    4359 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 04:05:19.219087    4359 out.go:201] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-003000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:629: *** TestOffline FAILED at 2024-09-16 04:05:19.23296 -0700 PDT m=+2735.254823459
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-003000 -n offline-docker-003000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-003000 -n offline-docker-003000: exit status 7 (66.880792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-003000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-003000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-003000
--- FAIL: TestOffline (10.04s)
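
Every qemu2 start in this run dies at the same point: socket_vmnet_client cannot reach the socket_vmnet daemon's unix socket at /var/run/socket_vmnet, so qemu never receives the connected file descriptor that -netdev socket,id=net0,fd=3 expects, and the VM is never created. A first triage on the host might look like the sketch below, assuming the Homebrew layout the log shows; the brew services invocation is an assumption about how the daemon was installed and may differ per machine:

	ls -l /var/run/socket_vmnet                 # does the daemon's socket exist at all?
	sudo lsof -U | grep socket_vmnet            # is any process actually listening on it?
	sudo brew services restart socket_vmnet     # assumption: daemon managed via Homebrew services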

TestAddons/parallel/Registry (71.32s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 3.028375ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-fwzqm" [356ec898-bcc6-438e-88a6-3e2540fbe09a] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.005590042s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-hjk6w" [997df24f-5154-460b-ab90-cfa8f452443b] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.008101625s
addons_test.go:342: (dbg) Run:  kubectl --context addons-490000 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-490000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context addons-490000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.067289083s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-490000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
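The wget probe conflates two failure modes: cluster DNS not resolving the service name, and the registry pods not answering HTTP. A hedged way to separate them by hand, reusing the context and busybox image from the log (the pod name registry-debug is illustrative):

	kubectl --context addons-490000 run registry-debug --rm -it --restart=Never \
	  --image=gcr.io/k8s-minikube/busybox -- \
	  nslookup registry.kube-system.svc.cluster.local   # if DNS resolves, rerun with the original wget command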
addons_test.go:361: (dbg) Run:  out/minikube-darwin-arm64 -p addons-490000 ip
2024/09/16 03:33:17 [DEBUG] GET http://192.168.105.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 -p addons-490000 addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p addons-490000 -n addons-490000
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p addons-490000 logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-091000 | jenkins | v1.34.0 | 16 Sep 24 03:19 PDT |                     |
	|         | -p download-only-091000              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                      |         |         |                     |                     |
	|         | --container-runtime=docker           |                      |         |         |                     |                     |
	|         | --driver=qemu2                       |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.34.0 | 16 Sep 24 03:19 PDT | 16 Sep 24 03:19 PDT |
	| delete  | -p download-only-091000              | download-only-091000 | jenkins | v1.34.0 | 16 Sep 24 03:19 PDT | 16 Sep 24 03:19 PDT |
	| start   | -o=json --download-only              | download-only-172000 | jenkins | v1.34.0 | 16 Sep 24 03:19 PDT |                     |
	|         | -p download-only-172000              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |                      |         |         |                     |                     |
	|         | --container-runtime=docker           |                      |         |         |                     |                     |
	|         | --driver=qemu2                       |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.34.0 | 16 Sep 24 03:20 PDT | 16 Sep 24 03:20 PDT |
	| delete  | -p download-only-172000              | download-only-172000 | jenkins | v1.34.0 | 16 Sep 24 03:20 PDT | 16 Sep 24 03:20 PDT |
	| delete  | -p download-only-091000              | download-only-091000 | jenkins | v1.34.0 | 16 Sep 24 03:20 PDT | 16 Sep 24 03:20 PDT |
	| delete  | -p download-only-172000              | download-only-172000 | jenkins | v1.34.0 | 16 Sep 24 03:20 PDT | 16 Sep 24 03:20 PDT |
	| start   | --download-only -p                   | binary-mirror-160000 | jenkins | v1.34.0 | 16 Sep 24 03:20 PDT |                     |
	|         | binary-mirror-160000                 |                      |         |         |                     |                     |
	|         | --alsologtostderr                    |                      |         |         |                     |                     |
	|         | --binary-mirror                      |                      |         |         |                     |                     |
	|         | http://127.0.0.1:49312               |                      |         |         |                     |                     |
	|         | --driver=qemu2                       |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-160000              | binary-mirror-160000 | jenkins | v1.34.0 | 16 Sep 24 03:20 PDT | 16 Sep 24 03:20 PDT |
	| addons  | disable dashboard -p                 | addons-490000        | jenkins | v1.34.0 | 16 Sep 24 03:20 PDT |                     |
	|         | addons-490000                        |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                  | addons-490000        | jenkins | v1.34.0 | 16 Sep 24 03:20 PDT |                     |
	|         | addons-490000                        |                      |         |         |                     |                     |
	| start   | -p addons-490000 --wait=true         | addons-490000        | jenkins | v1.34.0 | 16 Sep 24 03:20 PDT | 16 Sep 24 03:23 PDT |
	|         | --memory=4000 --alsologtostderr      |                      |         |         |                     |                     |
	|         | --addons=registry                    |                      |         |         |                     |                     |
	|         | --addons=metrics-server              |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                      |         |         |                     |                     |
	|         | --driver=qemu2  --addons=ingress     |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                      |         |         |                     |                     |
	| addons  | addons-490000 addons disable         | addons-490000        | jenkins | v1.34.0 | 16 Sep 24 03:23 PDT | 16 Sep 24 03:24 PDT |
	|         | volcano --alsologtostderr -v=1       |                      |         |         |                     |                     |
	| addons  | addons-490000 addons                 | addons-490000        | jenkins | v1.34.0 | 16 Sep 24 03:32 PDT | 16 Sep 24 03:32 PDT |
	|         | disable csi-hostpath-driver          |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-490000 addons                 | addons-490000        | jenkins | v1.34.0 | 16 Sep 24 03:32 PDT | 16 Sep 24 03:32 PDT |
	|         | disable volumesnapshots              |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-490000 addons                 | addons-490000        | jenkins | v1.34.0 | 16 Sep 24 03:32 PDT | 16 Sep 24 03:32 PDT |
	|         | disable metrics-server               |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p          | addons-490000        | jenkins | v1.34.0 | 16 Sep 24 03:32 PDT | 16 Sep 24 03:33 PDT |
	|         | addons-490000                        |                      |         |         |                     |                     |
	| ssh     | addons-490000 ssh curl -s            | addons-490000        | jenkins | v1.34.0 | 16 Sep 24 03:33 PDT | 16 Sep 24 03:33 PDT |
	|         | http://127.0.0.1/ -H 'Host:          |                      |         |         |                     |                     |
	|         | nginx.example.com'                   |                      |         |         |                     |                     |
	| ip      | addons-490000 ip                     | addons-490000        | jenkins | v1.34.0 | 16 Sep 24 03:33 PDT | 16 Sep 24 03:33 PDT |
	| addons  | addons-490000 addons disable         | addons-490000        | jenkins | v1.34.0 | 16 Sep 24 03:33 PDT | 16 Sep 24 03:33 PDT |
	|         | ingress-dns --alsologtostderr        |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	| addons  | addons-490000 addons disable         | addons-490000        | jenkins | v1.34.0 | 16 Sep 24 03:33 PDT |                     |
	|         | ingress --alsologtostderr -v=1       |                      |         |         |                     |                     |
	| ip      | addons-490000 ip                     | addons-490000        | jenkins | v1.34.0 | 16 Sep 24 03:33 PDT | 16 Sep 24 03:33 PDT |
	| addons  | addons-490000 addons disable         | addons-490000        | jenkins | v1.34.0 | 16 Sep 24 03:33 PDT | 16 Sep 24 03:33 PDT |
	|         | registry --alsologtostderr           |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 03:20:05
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 03:20:05.791985    1732 out.go:345] Setting OutFile to fd 1 ...
	I0916 03:20:05.792118    1732 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 03:20:05.792122    1732 out.go:358] Setting ErrFile to fd 2...
	I0916 03:20:05.792124    1732 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 03:20:05.792290    1732 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1133/.minikube/bin
	I0916 03:20:05.793424    1732 out.go:352] Setting JSON to false
	I0916 03:20:05.809587    1732 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1168,"bootTime":1726480837,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0916 03:20:05.809661    1732 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 03:20:05.814164    1732 out.go:177] * [addons-490000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0916 03:20:05.820088    1732 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 03:20:05.820163    1732 notify.go:220] Checking for updates...
	I0916 03:20:05.825509    1732 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	I0916 03:20:05.829101    1732 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0916 03:20:05.832152    1732 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 03:20:05.835123    1732 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	I0916 03:20:05.838080    1732 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 03:20:05.841224    1732 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 03:20:05.845135    1732 out.go:177] * Using the qemu2 driver based on user configuration
	I0916 03:20:05.852111    1732 start.go:297] selected driver: qemu2
	I0916 03:20:05.852117    1732 start.go:901] validating driver "qemu2" against <nil>
	I0916 03:20:05.852123    1732 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 03:20:05.854454    1732 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 03:20:05.857111    1732 out.go:177] * Automatically selected the socket_vmnet network
	I0916 03:20:05.860191    1732 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 03:20:05.860216    1732 cni.go:84] Creating CNI manager for ""
	I0916 03:20:05.860243    1732 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 03:20:05.860247    1732 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0916 03:20:05.860292    1732 start.go:340] cluster config:
	{Name:addons-490000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-490000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 03:20:05.863727    1732 iso.go:125] acquiring lock: {Name:mk55e9fb1297ea51932361bbd0234f8c1091a697 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 03:20:05.871903    1732 out.go:177] * Starting "addons-490000" primary control-plane node in "addons-490000" cluster
	I0916 03:20:05.876078    1732 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 03:20:05.876091    1732 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0916 03:20:05.876098    1732 cache.go:56] Caching tarball of preloaded images
	I0916 03:20:05.876149    1732 preload.go:172] Found /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0916 03:20:05.876154    1732 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0916 03:20:05.876341    1732 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/addons-490000/config.json ...
	I0916 03:20:05.876352    1732 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/addons-490000/config.json: {Name:mk624e0374cfebd15bcc14e9888f5951d3be61d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 03:20:05.876588    1732 start.go:360] acquireMachinesLock for addons-490000: {Name:mk751074fc8f4092f9eb4cf42ed6e63cca97a7fa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 03:20:05.876744    1732 start.go:364] duration metric: took 150.625µs to acquireMachinesLock for "addons-490000"
	I0916 03:20:05.876757    1732 start.go:93] Provisioning new machine with config: &{Name:addons-490000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-490000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 03:20:05.876781    1732 start.go:125] createHost starting for "" (driver="qemu2")
	I0916 03:20:05.885062    1732 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0916 03:20:06.222918    1732 start.go:159] libmachine.API.Create for "addons-490000" (driver="qemu2")
	I0916 03:20:06.222967    1732 client.go:168] LocalClient.Create starting
	I0916 03:20:06.223145    1732 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/ca.pem
	I0916 03:20:06.294544    1732 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/cert.pem
	I0916 03:20:06.347407    1732 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19651-1133/.minikube/cache/iso/arm64/minikube-v1.34.0-1726415472-19646-arm64.iso...
	I0916 03:20:06.654194    1732 main.go:141] libmachine: Creating SSH key...
	I0916 03:20:06.687899    1732 main.go:141] libmachine: Creating Disk image...
	I0916 03:20:06.687904    1732 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0916 03:20:06.688140    1732 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/addons-490000/disk.qcow2.raw /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/addons-490000/disk.qcow2
	I0916 03:20:06.707427    1732 main.go:141] libmachine: STDOUT: 
	I0916 03:20:06.707449    1732 main.go:141] libmachine: STDERR: 
	I0916 03:20:06.707529    1732 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/addons-490000/disk.qcow2 +20000M
	I0916 03:20:06.715446    1732 main.go:141] libmachine: STDOUT: Image resized.
	
	I0916 03:20:06.715461    1732 main.go:141] libmachine: STDERR: 
	I0916 03:20:06.715475    1732 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/addons-490000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/addons-490000/disk.qcow2
	I0916 03:20:06.715480    1732 main.go:141] libmachine: Starting QEMU VM...
	I0916 03:20:06.715518    1732 qemu.go:418] Using hvf for hardware acceleration
	I0916 03:20:06.715557    1732 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/addons-490000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/addons-490000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/addons-490000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:df:0c:2d:b0:c0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/addons-490000/disk.qcow2
	I0916 03:20:06.771742    1732 main.go:141] libmachine: STDOUT: 
	I0916 03:20:06.771768    1732 main.go:141] libmachine: STDERR: 
	I0916 03:20:06.771772    1732 main.go:141] libmachine: Attempt 0
	I0916 03:20:06.771788    1732 main.go:141] libmachine: Searching for c6:df:c:2d:b0:c0 in /var/db/dhcpd_leases ...
	I0916 03:20:06.771845    1732 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0916 03:20:06.771864    1732 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66e957b3}
	I0916 03:20:08.773994    1732 main.go:141] libmachine: Attempt 1
	I0916 03:20:08.774185    1732 main.go:141] libmachine: Searching for c6:df:c:2d:b0:c0 in /var/db/dhcpd_leases ...
	I0916 03:20:08.774554    1732 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0916 03:20:08.774605    1732 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66e957b3}
	I0916 03:20:10.776769    1732 main.go:141] libmachine: Attempt 2
	I0916 03:20:10.776858    1732 main.go:141] libmachine: Searching for c6:df:c:2d:b0:c0 in /var/db/dhcpd_leases ...
	I0916 03:20:10.777226    1732 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0916 03:20:10.777273    1732 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66e957b3}
	I0916 03:20:12.779402    1732 main.go:141] libmachine: Attempt 3
	I0916 03:20:12.779441    1732 main.go:141] libmachine: Searching for c6:df:c:2d:b0:c0 in /var/db/dhcpd_leases ...
	I0916 03:20:12.779574    1732 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0916 03:20:12.779585    1732 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66e957b3}
	I0916 03:20:14.781572    1732 main.go:141] libmachine: Attempt 4
	I0916 03:20:14.781586    1732 main.go:141] libmachine: Searching for c6:df:c:2d:b0:c0 in /var/db/dhcpd_leases ...
	I0916 03:20:14.781625    1732 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0916 03:20:14.781631    1732 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66e957b3}
	I0916 03:20:16.783642    1732 main.go:141] libmachine: Attempt 5
	I0916 03:20:16.783668    1732 main.go:141] libmachine: Searching for c6:df:c:2d:b0:c0 in /var/db/dhcpd_leases ...
	I0916 03:20:16.783740    1732 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0916 03:20:16.783754    1732 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66e957b3}
	I0916 03:20:18.785756    1732 main.go:141] libmachine: Attempt 6
	I0916 03:20:18.785776    1732 main.go:141] libmachine: Searching for c6:df:c:2d:b0:c0 in /var/db/dhcpd_leases ...
	I0916 03:20:18.785847    1732 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0916 03:20:18.785857    1732 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66e957b3}
	I0916 03:20:20.787959    1732 main.go:141] libmachine: Attempt 7
	I0916 03:20:20.788048    1732 main.go:141] libmachine: Searching for c6:df:c:2d:b0:c0 in /var/db/dhcpd_leases ...
	I0916 03:20:20.788489    1732 main.go:141] libmachine: Found 2 entries in /var/db/dhcpd_leases!
	I0916 03:20:20.788541    1732 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:c6:df:c:2d:b0:c0 ID:1,c6:df:c:2d:b0:c0 Lease:0x66e957e3}
	I0916 03:20:20.788556    1732 main.go:141] libmachine: Found match: c6:df:c:2d:b0:c0
	I0916 03:20:20.788594    1732 main.go:141] libmachine: IP: 192.168.105.2
	I0916 03:20:20.788660    1732 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
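The Attempt 0 through 7 loop above is libmachine polling the macOS vmnet DHCP lease database every two seconds: it rereads /var/db/dhcpd_leases until an entry matches the VM's MAC (note the search string c6:df:c:2d:b0:c0 drops the leading zero of the 0c octet, matching how the lease file stores addresses). The same lookup can be done by hand on the host:

	grep -B2 -A3 'c6:df:c:2d:b0:c0' /var/db/dhcpd_leases   # shows the name, ip_address and lease around the match
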
	I0916 03:20:23.808134    1732 machine.go:93] provisionDockerMachine start ...
	I0916 03:20:23.809384    1732 main.go:141] libmachine: Using SSH client type: native
	I0916 03:20:23.809787    1732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d51190] 0x102d539d0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0916 03:20:23.809804    1732 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 03:20:23.872699    1732 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0916 03:20:23.872721    1732 buildroot.go:166] provisioning hostname "addons-490000"
	I0916 03:20:23.872850    1732 main.go:141] libmachine: Using SSH client type: native
	I0916 03:20:23.873069    1732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d51190] 0x102d539d0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0916 03:20:23.873080    1732 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-490000 && echo "addons-490000" | sudo tee /etc/hostname
	I0916 03:20:23.931544    1732 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-490000
	
	I0916 03:20:23.931655    1732 main.go:141] libmachine: Using SSH client type: native
	I0916 03:20:23.931824    1732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d51190] 0x102d539d0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0916 03:20:23.931835    1732 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-490000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-490000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-490000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 03:20:23.978759    1732 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 03:20:23.978770    1732 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19651-1133/.minikube CaCertPath:/Users/jenkins/minikube-integration/19651-1133/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19651-1133/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19651-1133/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19651-1133/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19651-1133/.minikube}
	I0916 03:20:23.978782    1732 buildroot.go:174] setting up certificates
	I0916 03:20:23.978790    1732 provision.go:84] configureAuth start
	I0916 03:20:23.978795    1732 provision.go:143] copyHostCerts
	I0916 03:20:23.978909    1732 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19651-1133/.minikube/ca.pem (1078 bytes)
	I0916 03:20:23.979148    1732 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19651-1133/.minikube/cert.pem (1123 bytes)
	I0916 03:20:23.979280    1732 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19651-1133/.minikube/key.pem (1675 bytes)
	I0916 03:20:23.979380    1732 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19651-1133/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19651-1133/.minikube/certs/ca-key.pem org=jenkins.addons-490000 san=[127.0.0.1 192.168.105.2 addons-490000 localhost minikube]
	I0916 03:20:24.066718    1732 provision.go:177] copyRemoteCerts
	I0916 03:20:24.066770    1732 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 03:20:24.066787    1732 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/addons-490000/id_rsa Username:docker}
	I0916 03:20:24.090169    1732 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0916 03:20:24.098719    1732 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 03:20:24.106789    1732 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0916 03:20:24.114824    1732 provision.go:87] duration metric: took 136.029625ms to configureAuth
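configureAuth generated a per-machine server certificate whose SANs are listed in the log (127.0.0.1, 192.168.105.2, addons-490000, localhost, minikube). If a later TLS handshake to the Docker port were to fail, those SANs could be checked directly on the host with the path the log records:

	openssl x509 -noout -text \
	  -in /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'
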
	I0916 03:20:24.114834    1732 buildroot.go:189] setting minikube options for container-runtime
	I0916 03:20:24.114940    1732 config.go:182] Loaded profile config "addons-490000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 03:20:24.114981    1732 main.go:141] libmachine: Using SSH client type: native
	I0916 03:20:24.115087    1732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d51190] 0x102d539d0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0916 03:20:24.115092    1732 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0916 03:20:24.156460    1732 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0916 03:20:24.156468    1732 buildroot.go:70] root file system type: tmpfs
	I0916 03:20:24.156515    1732 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0916 03:20:24.156566    1732 main.go:141] libmachine: Using SSH client type: native
	I0916 03:20:24.156671    1732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d51190] 0x102d539d0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0916 03:20:24.156703    1732 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0916 03:20:24.199817    1732 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0916 03:20:24.199882    1732 main.go:141] libmachine: Using SSH client type: native
	I0916 03:20:24.199997    1732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d51190] 0x102d539d0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0916 03:20:24.200005    1732 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0916 03:20:25.579024    1732 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0916 03:20:25.579038    1732 machine.go:96] duration metric: took 1.770941625s to provisionDockerMachine
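The unit written above follows the standard systemd override idiom: a bare ExecStart= first clears the command inherited from the base unit, then the full dockerd command is supplied, and the diff ... || { mv ...; restart; } one-liner only installs the new file when the rendered unit actually changed. A quick check inside the guest that the override landed, as a sketch:

	sudo systemctl cat docker.service | grep -c '^ExecStart='   # expect 2: the clearing line plus the real command
	sudo systemctl is-active docker                             # expect: active
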
	I0916 03:20:25.579044    1732 client.go:171] duration metric: took 19.356681667s to LocalClient.Create
	I0916 03:20:25.579057    1732 start.go:167] duration metric: took 19.35675725s to libmachine.API.Create "addons-490000"
	I0916 03:20:25.579061    1732 start.go:293] postStartSetup for "addons-490000" (driver="qemu2")
	I0916 03:20:25.579066    1732 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 03:20:25.579139    1732 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 03:20:25.579149    1732 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/addons-490000/id_rsa Username:docker}
	I0916 03:20:25.604735    1732 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 03:20:25.607366    1732 info.go:137] Remote host: Buildroot 2023.02.9
	I0916 03:20:25.607379    1732 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19651-1133/.minikube/addons for local assets ...
	I0916 03:20:25.607483    1732 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19651-1133/.minikube/files for local assets ...
	I0916 03:20:25.607518    1732 start.go:296] duration metric: took 28.455417ms for postStartSetup
	I0916 03:20:25.607961    1732 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/addons-490000/config.json ...
	I0916 03:20:25.608149    1732 start.go:128] duration metric: took 19.731991792s to createHost
	I0916 03:20:25.608191    1732 main.go:141] libmachine: Using SSH client type: native
	I0916 03:20:25.608281    1732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d51190] 0x102d539d0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0916 03:20:25.608286    1732 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0916 03:20:25.651431    1732 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726482025.953238295
	
	I0916 03:20:25.651441    1732 fix.go:216] guest clock: 1726482025.953238295
	I0916 03:20:25.651445    1732 fix.go:229] Guest: 2024-09-16 03:20:25.953238295 -0700 PDT Remote: 2024-09-16 03:20:25.608155 -0700 PDT m=+19.835713709 (delta=345.083295ms)
	I0916 03:20:25.651461    1732 fix.go:200] guest clock delta is within tolerance: 345.083295ms
	I0916 03:20:25.651464    1732 start.go:83] releasing machines lock for "addons-490000", held for 19.775342834s
	I0916 03:20:25.651800    1732 ssh_runner.go:195] Run: cat /version.json
	I0916 03:20:25.651810    1732 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/addons-490000/id_rsa Username:docker}
	I0916 03:20:25.651801    1732 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 03:20:25.651853    1732 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/addons-490000/id_rsa Username:docker}
	I0916 03:20:25.671361    1732 ssh_runner.go:195] Run: systemctl --version
	I0916 03:20:25.673321    1732 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0916 03:20:25.675350    1732 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 03:20:25.675381    1732 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 03:20:25.718803    1732 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0916 03:20:25.718814    1732 start.go:495] detecting cgroup driver to use...
	I0916 03:20:25.718931    1732 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 03:20:25.726346    1732 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0916 03:20:25.730492    1732 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 03:20:25.734439    1732 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0916 03:20:25.734466    1732 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0916 03:20:25.738110    1732 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 03:20:25.742243    1732 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 03:20:25.746063    1732 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 03:20:25.750081    1732 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 03:20:25.753981    1732 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 03:20:25.758078    1732 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 03:20:25.762077    1732 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 03:20:25.766173    1732 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 03:20:25.770196    1732 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 03:20:25.773926    1732 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 03:20:25.861721    1732 ssh_runner.go:195] Run: sudo systemctl restart containerd
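The sed edits above rewrite /etc/containerd/config.toml in place: SystemdCgroup is forced to false (the cgroupfs driver), the legacy io.containerd.runtime.v1.linux and runc.v1 runtime names are mapped to io.containerd.runc.v2, and conf_dir is pointed at /etc/cni/net.d. After the restart, the key setting can be confirmed inside the guest:

	grep SystemdCgroup /etc/containerd/config.toml   # expect: SystemdCgroup = false
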
	I0916 03:20:25.872813    1732 start.go:495] detecting cgroup driver to use...
	I0916 03:20:25.872891    1732 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0916 03:20:25.880440    1732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 03:20:25.885918    1732 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 03:20:25.892473    1732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 03:20:25.897958    1732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 03:20:25.903398    1732 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0916 03:20:25.946843    1732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 03:20:25.953003    1732 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 03:20:25.959642    1732 ssh_runner.go:195] Run: which cri-dockerd
	I0916 03:20:25.960927    1732 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0916 03:20:25.964228    1732 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0916 03:20:25.970011    1732 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0916 03:20:26.054781    1732 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0916 03:20:26.141094    1732 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0916 03:20:26.141145    1732 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0916 03:20:26.147555    1732 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 03:20:26.228823    1732 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0916 03:20:28.422326    1732 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.193557417s)
	I0916 03:20:28.422406    1732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0916 03:20:28.428063    1732 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0916 03:20:28.434662    1732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0916 03:20:28.440375    1732 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0916 03:20:28.533422    1732 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0916 03:20:28.608165    1732 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 03:20:28.689356    1732 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0916 03:20:28.696048    1732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0916 03:20:28.701410    1732 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 03:20:28.788143    1732 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0916 03:20:28.812625    1732 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0916 03:20:28.812727    1732 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0916 03:20:28.815089    1732 start.go:563] Will wait 60s for crictl version
	I0916 03:20:28.815134    1732 ssh_runner.go:195] Run: which crictl
	I0916 03:20:28.816577    1732 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 03:20:28.838923    1732 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0916 03:20:28.839001    1732 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0916 03:20:28.852050    1732 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0916 03:20:28.869268    1732 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0916 03:20:28.869419    1732 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0916 03:20:28.871058    1732 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
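The one-liner above is an idempotent /etc/hosts update: grep -v strips any stale host.minikube.internal line, the fresh mapping is appended, and the whole file is swapped in with a single sudo cp so /etc/hosts is never left half-written. The same pattern is reused later for control-plane.minikube.internal. A generic sketch (HOST_IP and HOST_NAME are placeholder values from this run):

    HOST_IP=192.168.105.1; HOST_NAME=host.minikube.internal
    { grep -v $'\t'"$HOST_NAME"'$' /etc/hosts; printf '%s\t%s\n' "$HOST_IP" "$HOST_NAME"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$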
	I0916 03:20:28.875276    1732 kubeadm.go:883] updating cluster {Name:addons-490000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-490000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 03:20:28.875327    1732 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 03:20:28.875380    1732 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0916 03:20:28.880055    1732 docker.go:685] Got preloaded images: 
	I0916 03:20:28.880063    1732 docker.go:691] registry.k8s.io/kube-apiserver:v1.31.1 wasn't preloaded
	I0916 03:20:28.880119    1732 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0916 03:20:28.883509    1732 ssh_runner.go:195] Run: which lz4
	I0916 03:20:28.884965    1732 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0916 03:20:28.886323    1732 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0916 03:20:28.886333    1732 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (322160019 bytes)
	I0916 03:20:30.151556    1732 docker.go:649] duration metric: took 1.266673375s to copy over tarball
	I0916 03:20:30.151643    1732 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
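Since the stat probe showed no cached tarball on the fresh VM, the ~322 MB preload is copied over and unpacked straight into /var, preserving extended attributes so file capabilities on the bundled binaries survive. The manual equivalent, restating the commands the log runs:

    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm /preloaded.tar.lz4
    sudo systemctl daemon-reload && sudo systemctl restart docker
    docker images --format '{{.Repository}}:{{.Tag}}'   # the v1.31.1 control-plane images, no pulls needed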
	I0916 03:20:31.094521    1732 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0916 03:20:31.109468    1732 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0916 03:20:31.113655    1732 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I0916 03:20:31.119781    1732 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 03:20:31.210352    1732 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0916 03:20:33.422659    1732 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.212352916s)
	I0916 03:20:33.422777    1732 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0916 03:20:33.432138    1732 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0916 03:20:33.432155    1732 cache_images.go:84] Images are preloaded, skipping loading
	I0916 03:20:33.432176    1732 kubeadm.go:934] updating node { 192.168.105.2 8443 v1.31.1 docker true true} ...
	I0916 03:20:33.432253    1732 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-490000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-490000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
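Note the doubled ExecStart in the generated kubelet drop-in above: for non-additive settings, an empty `ExecStart=` first clears the command inherited from the packaged kubelet.service, and the second line installs the override (standard systemd drop-in behavior). To see the merged result on the node:

    sudo systemctl cat kubelet.service                    # unit plus the 10-kubeadm.conf drop-in
    systemctl show kubelet.service -p ExecStart --no-pager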
	I0916 03:20:33.432331    1732 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0916 03:20:33.453286    1732 cni.go:84] Creating CNI manager for ""
	I0916 03:20:33.453297    1732 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 03:20:33.453311    1732 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 03:20:33.453322    1732 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-490000 NodeName:addons-490000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 03:20:33.453394    1732 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-490000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0916 03:20:33.453460    1732 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 03:20:33.457717    1732 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 03:20:33.457755    1732 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 03:20:33.461627    1732 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0916 03:20:33.467631    1732 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 03:20:33.473403    1732 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0916 03:20:33.479901    1732 ssh_runner.go:195] Run: grep 192.168.105.2	control-plane.minikube.internal$ /etc/hosts
	I0916 03:20:33.481253    1732 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 03:20:33.485735    1732 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 03:20:33.569028    1732 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 03:20:33.579733    1732 certs.go:68] Setting up /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/addons-490000 for IP: 192.168.105.2
	I0916 03:20:33.579742    1732 certs.go:194] generating shared ca certs ...
	I0916 03:20:33.579753    1732 certs.go:226] acquiring lock for ca certs: {Name:mk7bbdd60870074cef3b6b7f58dae6ae1dc0ffa1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 03:20:33.579955    1732 certs.go:240] generating "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19651-1133/.minikube/ca.key
	I0916 03:20:33.741453    1732 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19651-1133/.minikube/ca.crt ...
	I0916 03:20:33.741464    1732 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19651-1133/.minikube/ca.crt: {Name:mk7da80048730547951745de1dceae059933e325 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 03:20:33.741791    1732 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19651-1133/.minikube/ca.key ...
	I0916 03:20:33.741795    1732 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19651-1133/.minikube/ca.key: {Name:mkf8ee043e94d335adbd7b86f116231c1f7ef887 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 03:20:33.741942    1732 certs.go:240] generating "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19651-1133/.minikube/proxy-client-ca.key
	I0916 03:20:33.916270    1732 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19651-1133/.minikube/proxy-client-ca.crt ...
	I0916 03:20:33.916284    1732 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19651-1133/.minikube/proxy-client-ca.crt: {Name:mk88f52e01c5c572c9e9f1c6b3642a14aff63fa7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 03:20:33.916526    1732 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19651-1133/.minikube/proxy-client-ca.key ...
	I0916 03:20:33.916533    1732 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19651-1133/.minikube/proxy-client-ca.key: {Name:mkbf4ecb38bf319b4d200d20ff5621b64dedeedb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 03:20:33.916690    1732 certs.go:256] generating profile certs ...
	I0916 03:20:33.916737    1732 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/addons-490000/client.key
	I0916 03:20:33.916747    1732 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/addons-490000/client.crt with IP's: []
	I0916 03:20:33.962832    1732 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/addons-490000/client.crt ...
	I0916 03:20:33.962836    1732 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/addons-490000/client.crt: {Name:mk8c99cae4e69f9b4ef1b4555cf39a9eb7ba1ad8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 03:20:33.962988    1732 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/addons-490000/client.key ...
	I0916 03:20:33.962990    1732 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/addons-490000/client.key: {Name:mk9904ec3f74729e1e540694bca6cfa14c6fb7ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 03:20:33.963115    1732 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/addons-490000/apiserver.key.3380c722
	I0916 03:20:33.963128    1732 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/addons-490000/apiserver.crt.3380c722 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.105.2]
	I0916 03:20:34.185346    1732 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/addons-490000/apiserver.crt.3380c722 ...
	I0916 03:20:34.185359    1732 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/addons-490000/apiserver.crt.3380c722: {Name:mkc2f7f57266d421fdce0c12190fd4d65ce31356 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 03:20:34.185649    1732 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/addons-490000/apiserver.key.3380c722 ...
	I0916 03:20:34.185655    1732 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/addons-490000/apiserver.key.3380c722: {Name:mkec9df7e0f6765d2295ef002c3207d6d38646d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 03:20:34.185792    1732 certs.go:381] copying /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/addons-490000/apiserver.crt.3380c722 -> /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/addons-490000/apiserver.crt
	I0916 03:20:34.185910    1732 certs.go:385] copying /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/addons-490000/apiserver.key.3380c722 -> /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/addons-490000/apiserver.key
	I0916 03:20:34.186020    1732 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/addons-490000/proxy-client.key
	I0916 03:20:34.186031    1732 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/addons-490000/proxy-client.crt with IP's: []
	I0916 03:20:34.227232    1732 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/addons-490000/proxy-client.crt ...
	I0916 03:20:34.227236    1732 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/addons-490000/proxy-client.crt: {Name:mk62a4fdd4bf2fe84b7a5936e382e72809e9dac7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 03:20:34.227380    1732 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/addons-490000/proxy-client.key ...
	I0916 03:20:34.227383    1732 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/addons-490000/proxy-client.key: {Name:mkdfb2f774981654dfcc786bfb639597805aad76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 03:20:34.227648    1732 certs.go:484] found cert: /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 03:20:34.227669    1732 certs.go:484] found cert: /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/ca.pem (1078 bytes)
	I0916 03:20:34.227688    1732 certs.go:484] found cert: /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/cert.pem (1123 bytes)
	I0916 03:20:34.227708    1732 certs.go:484] found cert: /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/key.pem (1675 bytes)
	I0916 03:20:34.228126    1732 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1133/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 03:20:34.238221    1732 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1133/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0916 03:20:34.247093    1732 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1133/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 03:20:34.255650    1732 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1133/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 03:20:34.263513    1732 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/addons-490000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0916 03:20:34.271502    1732 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/addons-490000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 03:20:34.279442    1732 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/addons-490000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 03:20:34.287283    1732 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/addons-490000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 03:20:34.295574    1732 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1133/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 03:20:34.303956    1732 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 03:20:34.312376    1732 ssh_runner.go:195] Run: openssl version
	I0916 03:20:34.314809    1732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 03:20:34.318713    1732 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 03:20:34.320343    1732 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:20 /usr/share/ca-certificates/minikubeCA.pem
	I0916 03:20:34.320368    1732 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 03:20:34.322457    1732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
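These few steps are what make the minikube CA trusted system-wide inside the VM: the PEM is dropped into /usr/share/ca-certificates, and a symlink named after its subject hash (b5213941 here, as printed by `openssl x509 -hash -noout`) is created under /etc/ssl/certs, the directory layout OpenSSL's hash-based CA lookup expects. A sketch of the same dance:

    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 in this run
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
    openssl verify -CApath /etc/ssl/certs /usr/share/ca-certificates/minikubeCA.pem   # should report OK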
	I0916 03:20:34.326395    1732 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 03:20:34.327869    1732 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 03:20:34.327912    1732 kubeadm.go:392] StartCluster: {Name:addons-490000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-490000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 03:20:34.327990    1732 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0916 03:20:34.333582    1732 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 03:20:34.337585    1732 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 03:20:34.341295    1732 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 03:20:34.344941    1732 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 03:20:34.344946    1732 kubeadm.go:157] found existing configuration files:
	
	I0916 03:20:34.344974    1732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 03:20:34.348386    1732 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 03:20:34.348414    1732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 03:20:34.351684    1732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 03:20:34.354848    1732 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 03:20:34.354880    1732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 03:20:34.358436    1732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 03:20:34.361811    1732 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 03:20:34.361840    1732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 03:20:34.365467    1732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 03:20:34.368962    1732 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 03:20:34.368989    1732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 03:20:34.372441    1732 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0916 03:20:34.394823    1732 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0916 03:20:34.394889    1732 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 03:20:34.431627    1732 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 03:20:34.431684    1732 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 03:20:34.431728    1732 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 03:20:34.435750    1732 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 03:20:34.458106    1732 out.go:235]   - Generating certificates and keys ...
	I0916 03:20:34.458140    1732 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 03:20:34.458169    1732 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 03:20:34.512311    1732 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 03:20:34.633268    1732 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 03:20:34.708646    1732 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 03:20:34.864983    1732 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 03:20:34.943125    1732 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 03:20:34.943196    1732 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-490000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0916 03:20:35.093659    1732 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 03:20:35.093743    1732 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-490000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0916 03:20:35.335189    1732 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 03:20:35.417836    1732 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 03:20:35.520135    1732 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 03:20:35.520171    1732 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 03:20:35.625614    1732 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 03:20:35.879122    1732 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 03:20:36.061825    1732 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 03:20:36.231242    1732 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 03:20:36.430959    1732 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 03:20:36.431150    1732 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 03:20:36.432465    1732 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 03:20:36.443688    1732 out.go:235]   - Booting up control plane ...
	I0916 03:20:36.443739    1732 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 03:20:36.443781    1732 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 03:20:36.443815    1732 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 03:20:36.443872    1732 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 03:20:36.444389    1732 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 03:20:36.444414    1732 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 03:20:36.540502    1732 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 03:20:36.540564    1732 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 03:20:37.051531    1732 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 510.64025ms
	I0916 03:20:37.051664    1732 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0916 03:20:40.054609    1732 kubeadm.go:310] [api-check] The API server is healthy after 3.003738418s
	I0916 03:20:40.060392    1732 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 03:20:40.064529    1732 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 03:20:40.072064    1732 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 03:20:40.072158    1732 kubeadm.go:310] [mark-control-plane] Marking the node addons-490000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 03:20:40.076277    1732 kubeadm.go:310] [bootstrap-token] Using token: vvj3d0.p1jt2ob3zgytlxgt
	I0916 03:20:40.079306    1732 out.go:235]   - Configuring RBAC rules ...
	I0916 03:20:40.079361    1732 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 03:20:40.080353    1732 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 03:20:40.088374    1732 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 03:20:40.089784    1732 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 03:20:40.090695    1732 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 03:20:40.091743    1732 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 03:20:40.468708    1732 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 03:20:40.868746    1732 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 03:20:41.458657    1732 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 03:20:41.459447    1732 kubeadm.go:310] 
	I0916 03:20:41.459502    1732 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 03:20:41.459517    1732 kubeadm.go:310] 
	I0916 03:20:41.459617    1732 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 03:20:41.459623    1732 kubeadm.go:310] 
	I0916 03:20:41.459642    1732 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 03:20:41.459714    1732 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 03:20:41.459778    1732 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 03:20:41.459788    1732 kubeadm.go:310] 
	I0916 03:20:41.459834    1732 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 03:20:41.459839    1732 kubeadm.go:310] 
	I0916 03:20:41.459891    1732 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 03:20:41.459898    1732 kubeadm.go:310] 
	I0916 03:20:41.459958    1732 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 03:20:41.460047    1732 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 03:20:41.460118    1732 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 03:20:41.460131    1732 kubeadm.go:310] 
	I0916 03:20:41.460198    1732 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 03:20:41.460287    1732 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 03:20:41.460299    1732 kubeadm.go:310] 
	I0916 03:20:41.460369    1732 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token vvj3d0.p1jt2ob3zgytlxgt \
	I0916 03:20:41.460475    1732 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:dd10c29c679a7875936030b69b87136fbf1dbd0c06d390de7566972b8cb65116 \
	I0916 03:20:41.460497    1732 kubeadm.go:310] 	--control-plane 
	I0916 03:20:41.460501    1732 kubeadm.go:310] 
	I0916 03:20:41.460585    1732 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 03:20:41.460601    1732 kubeadm.go:310] 
	I0916 03:20:41.460671    1732 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token vvj3d0.p1jt2ob3zgytlxgt \
	I0916 03:20:41.460784    1732 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:dd10c29c679a7875936030b69b87136fbf1dbd0c06d390de7566972b8cb65116 
	I0916 03:20:41.461084    1732 kubeadm.go:310] W0916 10:20:34.694113    1590 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 03:20:41.461397    1732 kubeadm.go:310] W0916 10:20:34.696174    1590 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 03:20:41.461516    1732 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
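The --discovery-token-ca-cert-hash in both join commands above pins the cluster CA's public key. It can be recomputed on the control plane with the documented kubeadm procedure; the sketch below assumes an RSA CA key and uses minikube's cert directory rather than the kubeadm default /etc/kubernetes/pki:

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
    # should print the dd10c29c... value embedded in the join commands above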
	I0916 03:20:41.461531    1732 cni.go:84] Creating CNI manager for ""
	I0916 03:20:41.461544    1732 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 03:20:41.469752    1732 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0916 03:20:41.474845    1732 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0916 03:20:41.481803    1732 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
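The 496-byte conflist payload is not shown in the log. A representative bridge-plus-portmap conflist matching the 10.244.0.0/16 pod CIDR chosen earlier would look like the sketch below; the keys are standard CNI bridge/host-local/portmap options, and the exact file minikube ships may differ:

    # Assumed shape of /etc/cni/net.d/1-k8s.conflist; actual contents not shown in the log.
    sudo tee /etc/cni/net.d/1-k8s.conflist <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "isDefaultGateway": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF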
	I0916 03:20:41.491468    1732 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 03:20:41.491563    1732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 03:20:41.491632    1732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-490000 minikube.k8s.io/updated_at=2024_09_16T03_20_41_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=addons-490000 minikube.k8s.io/primary=true
	I0916 03:20:41.553581    1732 ops.go:34] apiserver oom_adj: -16
	I0916 03:20:41.553625    1732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 03:20:42.055743    1732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 03:20:42.555784    1732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 03:20:43.055148    1732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 03:20:43.555723    1732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 03:20:44.055604    1732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 03:20:44.555668    1732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 03:20:45.055714    1732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 03:20:45.555543    1732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 03:20:46.055638    1732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 03:20:46.555516    1732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 03:20:46.587264    1732 kubeadm.go:1113] duration metric: took 5.095924208s to wait for elevateKubeSystemPrivileges
	I0916 03:20:46.587281    1732 kubeadm.go:394] duration metric: took 12.259760125s to StartCluster
	I0916 03:20:46.587290    1732 settings.go:142] acquiring lock: {Name:mk9072b559308de66cf3dabb49aa5dd0b6d18e61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 03:20:46.587467    1732 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19651-1133/kubeconfig
	I0916 03:20:46.587647    1732 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19651-1133/kubeconfig: {Name:mk6c71bad5b7ea3c1178ed072ac13c9e3d1e147d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 03:20:46.587895    1732 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 03:20:46.587921    1732 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 03:20:46.587945    1732 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0916 03:20:46.587997    1732 addons.go:69] Setting yakd=true in profile "addons-490000"
	I0916 03:20:46.588004    1732 addons.go:234] Setting addon yakd=true in "addons-490000"
	I0916 03:20:46.588004    1732 addons.go:69] Setting inspektor-gadget=true in profile "addons-490000"
	I0916 03:20:46.588009    1732 config.go:182] Loaded profile config "addons-490000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 03:20:46.588012    1732 addons.go:234] Setting addon inspektor-gadget=true in "addons-490000"
	I0916 03:20:46.588017    1732 host.go:66] Checking if "addons-490000" exists ...
	I0916 03:20:46.588030    1732 host.go:66] Checking if "addons-490000" exists ...
	I0916 03:20:46.588044    1732 addons.go:69] Setting storage-provisioner=true in profile "addons-490000"
	I0916 03:20:46.588048    1732 addons.go:234] Setting addon storage-provisioner=true in "addons-490000"
	I0916 03:20:46.588055    1732 host.go:66] Checking if "addons-490000" exists ...
	I0916 03:20:46.588057    1732 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-490000"
	I0916 03:20:46.588066    1732 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-490000"
	I0916 03:20:46.588079    1732 host.go:66] Checking if "addons-490000" exists ...
	I0916 03:20:46.588121    1732 addons.go:69] Setting default-storageclass=true in profile "addons-490000"
	I0916 03:20:46.588139    1732 addons.go:69] Setting ingress=true in profile "addons-490000"
	I0916 03:20:46.588233    1732 addons.go:234] Setting addon ingress=true in "addons-490000"
	I0916 03:20:46.588272    1732 host.go:66] Checking if "addons-490000" exists ...
	I0916 03:20:46.588148    1732 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-490000"
	I0916 03:20:46.588123    1732 addons.go:69] Setting registry=true in profile "addons-490000"
	I0916 03:20:46.588394    1732 addons.go:234] Setting addon registry=true in "addons-490000"
	I0916 03:20:46.588401    1732 host.go:66] Checking if "addons-490000" exists ...
	I0916 03:20:46.588431    1732 retry.go:31] will retry after 536.158534ms: connect: dial unix /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/addons-490000/monitor: connect: connection refused
	I0916 03:20:46.588156    1732 addons.go:69] Setting cloud-spanner=true in profile "addons-490000"
	I0916 03:20:46.588472    1732 addons.go:234] Setting addon cloud-spanner=true in "addons-490000"
	I0916 03:20:46.588480    1732 host.go:66] Checking if "addons-490000" exists ...
	I0916 03:20:46.588538    1732 retry.go:31] will retry after 655.457714ms: connect: dial unix /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/addons-490000/monitor: connect: connection refused
	I0916 03:20:46.588161    1732 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-490000"
	I0916 03:20:46.588600    1732 retry.go:31] will retry after 954.173915ms: connect: dial unix /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/addons-490000/monitor: connect: connection refused
	I0916 03:20:46.588605    1732 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-490000"
	I0916 03:20:46.588614    1732 host.go:66] Checking if "addons-490000" exists ...
	I0916 03:20:46.588169    1732 addons.go:69] Setting metrics-server=true in profile "addons-490000"
	I0916 03:20:46.588641    1732 addons.go:234] Setting addon metrics-server=true in "addons-490000"
	I0916 03:20:46.588684    1732 host.go:66] Checking if "addons-490000" exists ...
	I0916 03:20:46.588178    1732 addons.go:69] Setting volcano=true in profile "addons-490000"
	I0916 03:20:46.588706    1732 addons.go:234] Setting addon volcano=true in "addons-490000"
	I0916 03:20:46.588726    1732 host.go:66] Checking if "addons-490000" exists ...
	I0916 03:20:46.588729    1732 retry.go:31] will retry after 846.360285ms: connect: dial unix /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/addons-490000/monitor: connect: connection refused
	I0916 03:20:46.588185    1732 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-490000"
	I0916 03:20:46.588737    1732 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-490000"
	I0916 03:20:46.588737    1732 retry.go:31] will retry after 679.189771ms: connect: dial unix /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/addons-490000/monitor: connect: connection refused
	I0916 03:20:46.588192    1732 addons.go:69] Setting volumesnapshots=true in profile "addons-490000"
	I0916 03:20:46.588747    1732 addons.go:234] Setting addon volumesnapshots=true in "addons-490000"
	I0916 03:20:46.588754    1732 host.go:66] Checking if "addons-490000" exists ...
	I0916 03:20:46.588793    1732 retry.go:31] will retry after 832.584202ms: connect: dial unix /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/addons-490000/monitor: connect: connection refused
	I0916 03:20:46.588816    1732 retry.go:31] will retry after 1.252679317s: connect: dial unix /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/addons-490000/monitor: connect: connection refused
	I0916 03:20:46.588855    1732 retry.go:31] will retry after 943.27077ms: connect: dial unix /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/addons-490000/monitor: connect: connection refused
	I0916 03:20:46.588199    1732 addons.go:69] Setting ingress-dns=true in profile "addons-490000"
	I0916 03:20:46.588890    1732 retry.go:31] will retry after 1.41040527s: connect: dial unix /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/addons-490000/monitor: connect: connection refused
	I0916 03:20:46.588899    1732 addons.go:234] Setting addon ingress-dns=true in "addons-490000"
	I0916 03:20:46.588926    1732 host.go:66] Checking if "addons-490000" exists ...
	I0916 03:20:46.588204    1732 addons.go:69] Setting gcp-auth=true in profile "addons-490000"
	I0916 03:20:46.588944    1732 mustload.go:65] Loading cluster: addons-490000
	I0916 03:20:46.588953    1732 retry.go:31] will retry after 781.701475ms: connect: dial unix /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/addons-490000/monitor: connect: connection refused
	I0916 03:20:46.589013    1732 config.go:182] Loaded profile config "addons-490000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 03:20:46.589140    1732 retry.go:31] will retry after 731.636899ms: connect: dial unix /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/addons-490000/monitor: connect: connection refused
	I0916 03:20:46.589141    1732 retry.go:31] will retry after 843.785085ms: connect: dial unix /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/addons-490000/monitor: connect: connection refused
	I0916 03:20:46.591306    1732 out.go:177] * Verifying Kubernetes components...
	I0916 03:20:46.595200    1732 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0916 03:20:46.595208    1732 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 03:20:46.595218    1732 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0916 03:20:46.599253    1732 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 03:20:46.602191    1732 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0916 03:20:46.602476    1732 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0916 03:20:46.602489    1732 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/addons-490000/id_rsa Username:docker}
	I0916 03:20:46.606253    1732 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 03:20:46.606260    1732 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 03:20:46.606266    1732 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/addons-490000/id_rsa Username:docker}
	I0916 03:20:46.609242    1732 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0916 03:20:46.609247    1732 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0916 03:20:46.609252    1732 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/addons-490000/id_rsa Username:docker}
	I0916 03:20:46.637795    1732 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
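The pipeline above pulls the coredns ConfigMap, uses sed to splice a hosts{} block (resolving host.minikube.internal to the gateway IP) ahead of the forward plugin and a log directive ahead of errors, then pushes the result back with kubectl replace. To inspect the spliced Corefile afterwards:

    # Expected fragment (standard CoreDNS 'hosts' plugin; rest of the Corefile unchanged):
    #     log
    #     errors
    #     hosts {
    #        192.168.105.1 host.minikube.internal
    #        fallthrough
    #     }
    #     forward . /etc/resolv.conf
    sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'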
	I0916 03:20:46.712749    1732 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 03:20:46.761404    1732 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0916 03:20:46.761418    1732 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0916 03:20:46.772131    1732 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0916 03:20:46.785133    1732 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 03:20:46.793937    1732 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0916 03:20:46.793955    1732 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0916 03:20:46.833834    1732 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0916 03:20:46.833847    1732 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0916 03:20:46.889543    1732 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0916 03:20:46.889553    1732 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0916 03:20:46.936716    1732 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0916 03:20:46.954155    1732 start.go:971] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	I0916 03:20:46.955679    1732 node_ready.go:35] waiting up to 6m0s for node "addons-490000" to be "Ready" ...
	I0916 03:20:46.972600    1732 node_ready.go:49] node "addons-490000" has status "Ready":"True"
	I0916 03:20:46.972618    1732 node_ready.go:38] duration metric: took 16.914125ms for node "addons-490000" to be "Ready" ...
	I0916 03:20:46.972624    1732 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 03:20:46.986947    1732 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-cxjdd" in "kube-system" namespace to be "Ready" ...
	I0916 03:20:47.131850    1732 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0916 03:20:47.135855    1732 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0916 03:20:47.135869    1732 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0916 03:20:47.135881    1732 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/addons-490000/id_rsa Username:docker}
	I0916 03:20:47.250835    1732 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0916 03:20:47.254892    1732 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0916 03:20:47.254906    1732 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0916 03:20:47.254917    1732 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/addons-490000/id_rsa Username:docker}
	I0916 03:20:47.271852    1732 out.go:177]   - Using image docker.io/registry:2.8.3
	I0916 03:20:47.273230    1732 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0916 03:20:47.276856    1732 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0916 03:20:47.276870    1732 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0916 03:20:47.276881    1732 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/addons-490000/id_rsa Username:docker}
	I0916 03:20:47.326793    1732 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0916 03:20:47.334786    1732 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0916 03:20:47.340155    1732 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0916 03:20:47.340173    1732 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0916 03:20:47.343714    1732 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0916 03:20:47.349320    1732 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0916 03:20:47.349331    1732 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0916 03:20:47.349477    1732 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/addons-490000/id_rsa Username:docker}
	I0916 03:20:47.364993    1732 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0916 03:20:47.375856    1732 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0916 03:20:47.378871    1732 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0916 03:20:47.378879    1732 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0916 03:20:47.378893    1732 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/addons-490000/id_rsa Username:docker}
	I0916 03:20:47.411666    1732 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0916 03:20:47.411679    1732 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0916 03:20:47.416018    1732 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0916 03:20:47.416028    1732 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0916 03:20:47.424774    1732 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0916 03:20:47.427883    1732 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0916 03:20:47.439035    1732 retry.go:31] will retry after 1.522264799s: connect: dial unix /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/addons-490000/monitor: connect: connection refused
	I0916 03:20:47.439109    1732 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0916 03:20:47.439116    1732 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0916 03:20:47.439778    1732 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0916 03:20:47.442699    1732 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0916 03:20:47.445894    1732 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0916 03:20:47.445901    1732 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0916 03:20:47.445911    1732 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/addons-490000/id_rsa Username:docker}
	I0916 03:20:47.448865    1732 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0916 03:20:47.448873    1732 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0916 03:20:47.448881    1732 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/addons-490000/id_rsa Username:docker}
	I0916 03:20:47.451664    1732 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0916 03:20:47.451672    1732 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0916 03:20:47.456602    1732 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0916 03:20:47.457591    1732 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-490000" context rescaled to 1 replicas
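The rescale logged by kapi.go:214 trims the coredns Deployment down to a single replica on this one-node cluster; a hand-rolled equivalent would be:

		kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system \
		  scale deployment coredns --replicas=1

The second coredns pod seen later in the log (coredns-7c65d6cfc9-cxjdd, which ends up in phase Succeeded) is the surplus replica this rescale removes.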
	I0916 03:20:47.480490    1732 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0916 03:20:47.480502    1732 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0916 03:20:47.509753    1732 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0916 03:20:47.515958    1732 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0916 03:20:47.515967    1732 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0916 03:20:47.516522    1732 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0916 03:20:47.516529    1732 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0916 03:20:47.534216    1732 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-490000"
	I0916 03:20:47.534240    1732 host.go:66] Checking if "addons-490000" exists ...
	I0916 03:20:47.539344    1732 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0916 03:20:47.549287    1732 out.go:177]   - Using image docker.io/busybox:stable
	I0916 03:20:47.552730    1732 addons.go:234] Setting addon default-storageclass=true in "addons-490000"
	I0916 03:20:47.552752    1732 host.go:66] Checking if "addons-490000" exists ...
	I0916 03:20:47.553305    1732 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 03:20:47.553311    1732 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 03:20:47.553317    1732 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/addons-490000/id_rsa Username:docker}
	I0916 03:20:47.555299    1732 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0916 03:20:47.555307    1732 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0916 03:20:47.555313    1732 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/addons-490000/id_rsa Username:docker}
	I0916 03:20:47.555422    1732 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0916 03:20:47.555429    1732 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0916 03:20:47.566234    1732 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0916 03:20:47.566246    1732 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0916 03:20:47.591130    1732 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0916 03:20:47.635611    1732 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0916 03:20:47.638213    1732 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0916 03:20:47.638230    1732 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0916 03:20:47.642100    1732 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0916 03:20:47.642106    1732 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0916 03:20:47.688162    1732 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 03:20:47.688172    1732 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0916 03:20:47.700605    1732 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0916 03:20:47.709343    1732 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-490000 service yakd-dashboard -n yakd-dashboard
	
	I0916 03:20:47.761541    1732 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 03:20:47.804547    1732 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0916 03:20:47.804558    1732 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0916 03:20:47.812575    1732 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 03:20:47.848332    1732 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0916 03:20:47.851292    1732 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0916 03:20:47.860326    1732 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0916 03:20:47.867512    1732 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0916 03:20:47.878332    1732 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0916 03:20:47.885281    1732 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0916 03:20:47.892335    1732 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0916 03:20:47.900339    1732 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0916 03:20:47.903345    1732 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0916 03:20:47.903355    1732 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0916 03:20:47.903367    1732 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/addons-490000/id_rsa Username:docker}
	I0916 03:20:47.930563    1732 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0916 03:20:48.004201    1732 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0916 03:20:48.010285    1732 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0916 03:20:48.010300    1732 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0916 03:20:48.010311    1732 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/addons-490000/id_rsa Username:docker}
	I0916 03:20:48.133948    1732 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0916 03:20:48.133964    1732 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0916 03:20:48.239168    1732 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0916 03:20:48.239181    1732 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0916 03:20:48.347180    1732 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0916 03:20:48.347193    1732 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0916 03:20:48.404331    1732 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0916 03:20:48.404345    1732 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0916 03:20:48.407867    1732 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0916 03:20:48.407875    1732 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0916 03:20:48.506377    1732 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0916 03:20:48.506396    1732 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0916 03:20:48.512895    1732 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0916 03:20:48.512909    1732 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0916 03:20:48.566029    1732 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 03:20:48.566043    1732 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0916 03:20:48.575439    1732 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0916 03:20:48.575449    1732 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0916 03:20:48.615588    1732 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 03:20:48.659553    1732 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0916 03:20:48.659566    1732 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0916 03:20:48.757325    1732 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0916 03:20:48.757336    1732 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0916 03:20:48.852941    1732 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0916 03:20:48.852955    1732 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0916 03:20:48.963852    1732 host.go:66] Checking if "addons-490000" exists ...
	I0916 03:20:48.993310    1732 pod_ready.go:103] pod "coredns-7c65d6cfc9-cxjdd" in "kube-system" namespace has status "Ready":"False"
	I0916 03:20:49.021551    1732 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0916 03:20:49.021566    1732 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0916 03:20:49.260858    1732 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0916 03:20:51.002170    1732 pod_ready.go:103] pod "coredns-7c65d6cfc9-cxjdd" in "kube-system" namespace has status "Ready":"False"
	I0916 03:20:51.231798    1732 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (3.775301459s)
	I0916 03:20:51.231813    1732 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (3.722163916s)
	I0916 03:20:51.231838    1732 addons.go:475] Verifying addon registry=true in "addons-490000"
	I0916 03:20:51.231925    1732 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (3.640900542s)
	I0916 03:20:51.231931    1732 addons.go:475] Verifying addon ingress=true in "addons-490000"
	I0916 03:20:51.231980    1732 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.596475542s)
	I0916 03:20:51.232002    1732 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (3.531499959s)
	I0916 03:20:51.232053    1732 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.470607041s)
	I0916 03:20:51.232058    1732 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.4195795s)
	W0916 03:20:51.232293    1732 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0916 03:20:51.232306    1732 retry.go:31] will retry after 263.366905ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0916 03:20:51.232096    1732 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.301625542s)
	I0916 03:20:51.232121    1732 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.616605625s)
	I0916 03:20:51.232341    1732 addons.go:475] Verifying addon metrics-server=true in "addons-490000"
	I0916 03:20:51.235849    1732 out.go:177] * Verifying registry addon...
	I0916 03:20:51.244756    1732 out.go:177] * Verifying ingress addon...
	I0916 03:20:51.252195    1732 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0916 03:20:51.256152    1732 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0916 03:20:51.263270    1732 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0916 03:20:51.263279    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 03:20:51.263345    1732 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0916 03:20:51.263352    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0916 03:20:51.264023    1732 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
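The storage-provisioner-rancher error above is an optimistic-concurrency conflict: two writers raced to update the same StorageClass object, so the second write was rejected with a stale resourceVersion. Re-reading the object and re-applying the change clears it; done by hand, marking the class default is a one-line patch (class name taken from the error message):

		kubectl patch storageclass local-path -p \
		  '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'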
	I0916 03:20:51.495957    1732 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
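The "no matches for kind VolumeSnapshotClass" failure being retried here is a CRD establishment race: the CRDs and a resource of the new kind were applied in a single pass, before the API server had registered the kind (hence "ensure CRDs are installed first" in stderr). The retry above, with --force, appears to go through once the CRDs exist. A race-free sequence would apply the CRDs first and block on their Established condition, e.g.:

		kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
		kubectl wait --for condition=established --timeout=60s \
		  crd/volumesnapshotclasses.snapshot.storage.k8s.io
		kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml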
	I0916 03:20:51.694397    1732 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.433595292s)
	I0916 03:20:51.694414    1732 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-490000"
	I0916 03:20:51.698656    1732 out.go:177] * Verifying csi-hostpath-driver addon...
	I0916 03:20:51.707180    1732 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0916 03:20:51.721834    1732 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0916 03:20:51.721844    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:20:51.822810    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 03:20:51.823007    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:20:52.212007    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:20:52.314708    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 03:20:52.314772    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:20:52.711604    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:20:52.813861    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 03:20:52.813930    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:20:53.211532    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:20:53.255971    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 03:20:53.257644    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:20:53.491696    1732 pod_ready.go:103] pod "coredns-7c65d6cfc9-cxjdd" in "kube-system" namespace has status "Ready":"False"
	I0916 03:20:53.711698    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:20:53.755824    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 03:20:53.757787    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:20:54.211676    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:20:54.313072    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 03:20:54.313360    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:20:54.711878    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:20:54.756140    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 03:20:54.757734    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:20:55.211493    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:20:55.255421    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 03:20:55.257941    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:20:55.711923    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:20:55.756441    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 03:20:55.758427    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:20:55.992692    1732 pod_ready.go:103] pod "coredns-7c65d6cfc9-cxjdd" in "kube-system" namespace has status "Ready":"False"
	I0916 03:20:56.212716    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:20:56.312943    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 03:20:56.313070    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:20:56.711682    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:20:56.755632    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 03:20:56.757519    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:20:56.969947    1732 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0916 03:20:56.969962    1732 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/addons-490000/id_rsa Username:docker}
	I0916 03:20:57.023246    1732 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0916 03:20:57.029448    1732 addons.go:234] Setting addon gcp-auth=true in "addons-490000"
	I0916 03:20:57.029471    1732 host.go:66] Checking if "addons-490000" exists ...
	I0916 03:20:57.030288    1732 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0916 03:20:57.030297    1732 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/addons-490000/id_rsa Username:docker}
	I0916 03:20:57.059532    1732 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0916 03:20:57.079037    1732 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0916 03:20:57.103756    1732 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0916 03:20:57.103765    1732 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0916 03:20:57.110568    1732 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0916 03:20:57.110576    1732 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0916 03:20:57.120453    1732 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0916 03:20:57.120460    1732 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0916 03:20:57.129755    1732 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0916 03:20:57.211535    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:20:57.255853    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 03:20:57.257559    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:20:57.416802    1732 addons.go:475] Verifying addon gcp-auth=true in "addons-490000"
	I0916 03:20:57.420428    1732 out.go:177] * Verifying gcp-auth addon...
	I0916 03:20:57.427880    1732 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0916 03:20:57.429074    1732 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0916 03:20:57.711607    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:20:57.771171    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 03:20:57.771323    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:20:57.991512    1732 pod_ready.go:98] pod "coredns-7c65d6cfc9-cxjdd" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 03:20:57 -0700 PDT Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 03:20:47 -0700 PDT Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 03:20:47 -0700 PDT Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 03:20:47 -0700 PDT Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 03:20:47 -0700 PDT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.105.2 HostIPs:[{IP:192.168.105.2}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-09-16 03:20:47 -0700 PDT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-16 03:20:47 -0700 PDT,FinishedAt:2024-09-16 03:20:57 -0700 PDT,ContainerID:docker://e4a4a5e96d4fc8a42c56cae4d54247b72a7031a22db3ea3d9b40b9c7e6075c49,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e ContainerID:docker://e4a4a5e96d4fc8a42c56cae4d54247b72a7031a22db3ea3d9b40b9c7e6075c49 Started:0x14001f509a0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0x14000648540} {Name:kube-api-access-7vv2d MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0x14000648570}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0916 03:20:57.991526    1732 pod_ready.go:82] duration metric: took 11.0049135s for pod "coredns-7c65d6cfc9-cxjdd" in "kube-system" namespace to be "Ready" ...
	E0916 03:20:57.991531    1732 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-7c65d6cfc9-cxjdd" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 03:20:57 -0700 PDT Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 03:20:47 -0700 PDT Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 03:20:47 -0700 PDT Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 03:20:47 -0700 PDT Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 03:20:47 -0700 PDT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.105.2 HostIPs:[{IP:192.168.105.2}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-09-16 03:20:47 -0700 PDT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-16 03:20:47 -0700 PDT,FinishedAt:2024-09-16 03:20:57 -0700 PDT,ContainerID:docker://e4a4a5e96d4fc8a42c56cae4d54247b72a7031a22db3ea3d9b40b9c7e6075c49,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e ContainerID:docker://e4a4a5e96d4fc8a42c56cae4d54247b72a7031a22db3ea3d9b40b9c7e6075c49 Started:0x14001f509a0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0x14000648540} {Name:kube-api-access-7vv2d MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0x14000648570}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0916 03:20:57.991539    1732 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-pglr2" in "kube-system" namespace to be "Ready" ...
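The Succeeded phase that ended the wait on coredns-7c65d6cfc9-cxjdd is most plausibly a side effect of the earlier rescale to one replica: the surplus pod's container exited cleanly (Reason:Completed, ExitCode:0 in the dump above), so the waiter correctly skips it and moves on to the surviving replica, coredns-7c65d6cfc9-pglr2. The phase can be confirmed directly:

		kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system \
		  get pod coredns-7c65d6cfc9-cxjdd -o jsonpath='{.status.phase}'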
	I0916 03:20:58.211645    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:20:58.255655    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 03:20:58.257449    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:20:58.711436    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:20:58.755694    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 03:20:58.757356    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:20:59.211914    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:20:59.255780    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 03:20:59.257520    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:20:59.712037    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:20:59.756077    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 03:20:59.758206    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:20:59.995962    1732 pod_ready.go:103] pod "coredns-7c65d6cfc9-pglr2" in "kube-system" namespace has status "Ready":"False"
	I0916 03:21:00.211585    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:21:00.312557    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 03:21:00.312840    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:00.711477    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:21:00.755516    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 03:21:00.757559    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:01.211375    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:21:01.256199    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 03:21:01.258351    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:01.711218    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:21:01.755507    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 03:21:01.757534    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:01.996299    1732 pod_ready.go:103] pod "coredns-7c65d6cfc9-pglr2" in "kube-system" namespace has status "Ready":"False"
	I0916 03:21:02.211335    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:21:02.255645    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 03:21:02.257536    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:02.711286    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:21:02.755407    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 03:21:02.757431    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:03.211958    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:21:03.255617    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 03:21:03.258156    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:03.713491    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:21:03.756733    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 03:21:03.759062    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:04.001269    1732 pod_ready.go:103] pod "coredns-7c65d6cfc9-pglr2" in "kube-system" namespace has status "Ready":"False"
	I0916 03:21:04.216226    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:21:04.255825    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 03:21:04.257474    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:04.711352    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:21:04.811648    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 03:21:04.811885    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:05.211872    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:21:05.256083    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 03:21:05.258347    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:05.711246    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:21:05.755520    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 03:21:05.757417    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:06.209771    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:21:06.256034    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 03:21:06.257713    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:06.495486    1732 pod_ready.go:103] pod "coredns-7c65d6cfc9-pglr2" in "kube-system" namespace has status "Ready":"False"
	I0916 03:21:06.711467    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:21:06.755430    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 03:21:06.757149    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:07.211171    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:21:07.255430    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 03:21:07.257200    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:07.711228    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:21:07.811707    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 03:21:07.811813    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:08.212435    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:21:08.255999    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 03:21:08.258375    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:08.497799    1732 pod_ready.go:103] pod "coredns-7c65d6cfc9-pglr2" in "kube-system" namespace has status "Ready":"False"
	I0916 03:21:08.711877    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:21:08.756320    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 03:21:08.758282    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:09.211185    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:21:09.255511    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 03:21:09.257416    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:09.710615    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:21:09.754547    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 03:21:09.757404    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:10.211071    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:21:10.255249    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 03:21:10.257059    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:10.711143    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:21:10.755594    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 03:21:10.756983    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:10.996074    1732 pod_ready.go:103] pod "coredns-7c65d6cfc9-pglr2" in "kube-system" namespace has status "Ready":"False"
	I0916 03:21:11.210754    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:21:11.255308    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 03:21:11.257168    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:11.710943    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:21:11.753703    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 03:21:11.757712    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:12.210643    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:21:12.255637    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 03:21:12.257036    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:12.710872    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:21:12.755035    1732 kapi.go:107] duration metric: took 21.503523542s to wait for kubernetes.io/minikube-addons=registry ...
	I0916 03:21:12.756944    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:12.997299    1732 pod_ready.go:103] pod "coredns-7c65d6cfc9-pglr2" in "kube-system" namespace has status "Ready":"False"
	I0916 03:21:13.213027    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:21:13.262119    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:13.711196    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:21:13.760942    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:14.211002    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:21:14.259922    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:14.710652    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:21:14.760894    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:15.210711    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:21:15.259583    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:15.495715    1732 pod_ready.go:103] pod "coredns-7c65d6cfc9-pglr2" in "kube-system" namespace has status "Ready":"False"
	I0916 03:21:15.710762    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:21:15.759670    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:16.211006    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:21:16.259128    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:16.709710    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:21:16.759489    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:17.210825    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:21:17.259640    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:17.495955    1732 pod_ready.go:103] pod "coredns-7c65d6cfc9-pglr2" in "kube-system" namespace has status "Ready":"False"
	I0916 03:21:17.710821    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:21:17.758187    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:18.211113    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:21:18.259423    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:18.711060    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:21:18.759677    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:19.210690    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:21:19.259265    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:19.710781    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:21:19.758008    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:19.995846    1732 pod_ready.go:103] pod "coredns-7c65d6cfc9-pglr2" in "kube-system" namespace has status "Ready":"False"
	I0916 03:21:20.211177    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:21:20.259229    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:20.495683    1732 pod_ready.go:93] pod "coredns-7c65d6cfc9-pglr2" in "kube-system" namespace has status "Ready":"True"
	I0916 03:21:20.495693    1732 pod_ready.go:82] duration metric: took 22.504865375s for pod "coredns-7c65d6cfc9-pglr2" in "kube-system" namespace to be "Ready" ...
	I0916 03:21:20.495697    1732 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-490000" in "kube-system" namespace to be "Ready" ...
	I0916 03:21:20.497562    1732 pod_ready.go:93] pod "etcd-addons-490000" in "kube-system" namespace has status "Ready":"True"
	I0916 03:21:20.497571    1732 pod_ready.go:82] duration metric: took 1.870209ms for pod "etcd-addons-490000" in "kube-system" namespace to be "Ready" ...
	I0916 03:21:20.497575    1732 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-490000" in "kube-system" namespace to be "Ready" ...
	I0916 03:21:20.499528    1732 pod_ready.go:93] pod "kube-apiserver-addons-490000" in "kube-system" namespace has status "Ready":"True"
	I0916 03:21:20.499536    1732 pod_ready.go:82] duration metric: took 1.95825ms for pod "kube-apiserver-addons-490000" in "kube-system" namespace to be "Ready" ...
	I0916 03:21:20.499540    1732 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-490000" in "kube-system" namespace to be "Ready" ...
	I0916 03:21:20.501406    1732 pod_ready.go:93] pod "kube-controller-manager-addons-490000" in "kube-system" namespace has status "Ready":"True"
	I0916 03:21:20.501412    1732 pod_ready.go:82] duration metric: took 1.868625ms for pod "kube-controller-manager-addons-490000" in "kube-system" namespace to be "Ready" ...
	I0916 03:21:20.501415    1732 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-cbxhg" in "kube-system" namespace to be "Ready" ...
	I0916 03:21:20.506460    1732 pod_ready.go:93] pod "kube-proxy-cbxhg" in "kube-system" namespace has status "Ready":"True"
	I0916 03:21:20.506466    1732 pod_ready.go:82] duration metric: took 5.048042ms for pod "kube-proxy-cbxhg" in "kube-system" namespace to be "Ready" ...
	I0916 03:21:20.506470    1732 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-490000" in "kube-system" namespace to be "Ready" ...
	I0916 03:21:20.719394    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:21:20.759430    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:20.896304    1732 pod_ready.go:93] pod "kube-scheduler-addons-490000" in "kube-system" namespace has status "Ready":"True"
	I0916 03:21:20.896315    1732 pod_ready.go:82] duration metric: took 389.855041ms for pod "kube-scheduler-addons-490000" in "kube-system" namespace to be "Ready" ...
	I0916 03:21:20.896319    1732 pod_ready.go:39] duration metric: took 33.92476525s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
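
The pod_ready.go lines above record a simple poll loop: each system pod is re-fetched until its Ready condition reports True, and the elapsed time is logged as a duration metric. A minimal client-go sketch of that pattern (the helper name and intervals are illustrative, not minikube's actual code):

    package readiness

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitPodReady re-fetches a pod every two seconds until its Ready
    // condition is True or the timeout expires. Transient API errors
    // are swallowed so the loop keeps retrying until the deadline.
    func waitPodReady(ctx context.Context, c kubernetes.Interface, ns, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
            func(ctx context.Context) (bool, error) {
                pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil
                }
                for _, cond := range pod.Status.Conditions {
                    if cond.Type == corev1.PodReady {
                        return cond.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }
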
	I0916 03:21:20.896329    1732 api_server.go:52] waiting for apiserver process to appear ...
	I0916 03:21:20.896404    1732 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 03:21:20.904423    1732 api_server.go:72] duration metric: took 34.31758025s to wait for apiserver process to appear ...
	I0916 03:21:20.904431    1732 api_server.go:88] waiting for apiserver healthz status ...
	I0916 03:21:20.904440    1732 api_server.go:253] Checking apiserver healthz at https://192.168.105.2:8443/healthz ...
	I0916 03:21:20.907522    1732 api_server.go:279] https://192.168.105.2:8443/healthz returned 200:
	ok
	I0916 03:21:20.908308    1732 api_server.go:141] control plane version: v1.31.1
	I0916 03:21:20.908315    1732 api_server.go:131] duration metric: took 3.881084ms to wait for apiserver health ...
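
The api_server.go lines show the health check itself: an HTTPS GET against /healthz whose 200 response body reads "ok". A self-contained sketch of that probe (skipping TLS verification is an assumption made here for a quick local check, not minikube's configuration, which trusts the cluster CA):

    package health

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "strings"
        "time"
    )

    // checkHealthz GETs an apiserver /healthz endpoint and treats a
    // 200 response with body "ok" as healthy.
    func checkHealthz(url string) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // Stand-in for trusting the cluster CA; acceptable for
                // a local probe, unsafe anywhere else.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        body, err := io.ReadAll(resp.Body)
        if err != nil {
            return err
        }
        if resp.StatusCode != http.StatusOK || strings.TrimSpace(string(body)) != "ok" {
            return fmt.Errorf("healthz returned %d: %q", resp.StatusCode, body)
        }
        return nil
    }

Calling checkHealthz("https://192.168.105.2:8443/healthz") reproduces the probe logged above.
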
	I0916 03:21:20.908319    1732 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 03:21:21.099929    1732 system_pods.go:59] 17 kube-system pods found
	I0916 03:21:21.099940    1732 system_pods.go:61] "coredns-7c65d6cfc9-pglr2" [10db40da-72f6-4dcb-9014-5f543ddf4396] Running
	I0916 03:21:21.099944    1732 system_pods.go:61] "csi-hostpath-attacher-0" [406d1e84-e21c-4577-88fa-79c450346004] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0916 03:21:21.099946    1732 system_pods.go:61] "csi-hostpath-resizer-0" [7d14cc7f-b220-43b0-bb9b-eeb6f4ef8b45] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0916 03:21:21.099950    1732 system_pods.go:61] "csi-hostpathplugin-7lzhv" [f4a93790-4c8b-473f-874c-9a3e3f9792a6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0916 03:21:21.099952    1732 system_pods.go:61] "etcd-addons-490000" [e3270265-77e8-49dd-8ec6-3abf396501c7] Running
	I0916 03:21:21.099954    1732 system_pods.go:61] "kube-apiserver-addons-490000" [d14e6df0-d65a-4754-8685-fd53f452be8c] Running
	I0916 03:21:21.099956    1732 system_pods.go:61] "kube-controller-manager-addons-490000" [35673539-d9b1-41a1-baba-a9cc52d45345] Running
	I0916 03:21:21.099958    1732 system_pods.go:61] "kube-ingress-dns-minikube" [7c0be0b3-dd91-4531-b4f9-245d908e2e48] Running
	I0916 03:21:21.099959    1732 system_pods.go:61] "kube-proxy-cbxhg" [9a4a0192-1fd0-4c33-87c3-44de01ac6a4b] Running
	I0916 03:21:21.099961    1732 system_pods.go:61] "kube-scheduler-addons-490000" [0787c9c5-997c-4f8d-9145-b36ff4bbf923] Running
	I0916 03:21:21.099963    1732 system_pods.go:61] "metrics-server-84c5f94fbc-49wsl" [a33f499e-d3e9-4aa6-a561-f8d7a17f8390] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0916 03:21:21.099965    1732 system_pods.go:61] "nvidia-device-plugin-daemonset-xr4jg" [4f23bf82-a6dd-44ab-af49-c86decb6acad] Running
	I0916 03:21:21.099967    1732 system_pods.go:61] "registry-66c9cd494c-fwzqm" [356ec898-bcc6-438e-88a6-3e2540fbe09a] Running
	I0916 03:21:21.099968    1732 system_pods.go:61] "registry-proxy-hjk6w" [997df24f-5154-460b-ab90-cfa8f452443b] Running
	I0916 03:21:21.099970    1732 system_pods.go:61] "snapshot-controller-56fcc65765-nf4b2" [b6019bda-9dd6-4b45-aafd-ed0929a688f5] Running
	I0916 03:21:21.099973    1732 system_pods.go:61] "snapshot-controller-56fcc65765-sbkqf" [091bf214-22c8-4f24-a753-3fc346119cad] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 03:21:21.099975    1732 system_pods.go:61] "storage-provisioner" [9ac22c2b-396a-44a1-9d67-eb43c399da5c] Running
	I0916 03:21:21.099979    1732 system_pods.go:74] duration metric: took 191.663542ms to wait for pod list to return data ...
	I0916 03:21:21.099983    1732 default_sa.go:34] waiting for default service account to be created ...
	I0916 03:21:21.209465    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:21:21.259204    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:21.296349    1732 default_sa.go:45] found service account: "default"
	I0916 03:21:21.296360    1732 default_sa.go:55] duration metric: took 196.380708ms for default service account to be created ...
	I0916 03:21:21.296364    1732 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 03:21:21.499554    1732 system_pods.go:86] 17 kube-system pods found
	I0916 03:21:21.499563    1732 system_pods.go:89] "coredns-7c65d6cfc9-pglr2" [10db40da-72f6-4dcb-9014-5f543ddf4396] Running
	I0916 03:21:21.499567    1732 system_pods.go:89] "csi-hostpath-attacher-0" [406d1e84-e21c-4577-88fa-79c450346004] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0916 03:21:21.499570    1732 system_pods.go:89] "csi-hostpath-resizer-0" [7d14cc7f-b220-43b0-bb9b-eeb6f4ef8b45] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0916 03:21:21.499573    1732 system_pods.go:89] "csi-hostpathplugin-7lzhv" [f4a93790-4c8b-473f-874c-9a3e3f9792a6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0916 03:21:21.499575    1732 system_pods.go:89] "etcd-addons-490000" [e3270265-77e8-49dd-8ec6-3abf396501c7] Running
	I0916 03:21:21.499578    1732 system_pods.go:89] "kube-apiserver-addons-490000" [d14e6df0-d65a-4754-8685-fd53f452be8c] Running
	I0916 03:21:21.499580    1732 system_pods.go:89] "kube-controller-manager-addons-490000" [35673539-d9b1-41a1-baba-a9cc52d45345] Running
	I0916 03:21:21.499582    1732 system_pods.go:89] "kube-ingress-dns-minikube" [7c0be0b3-dd91-4531-b4f9-245d908e2e48] Running
	I0916 03:21:21.499584    1732 system_pods.go:89] "kube-proxy-cbxhg" [9a4a0192-1fd0-4c33-87c3-44de01ac6a4b] Running
	I0916 03:21:21.499594    1732 system_pods.go:89] "kube-scheduler-addons-490000" [0787c9c5-997c-4f8d-9145-b36ff4bbf923] Running
	I0916 03:21:21.499598    1732 system_pods.go:89] "metrics-server-84c5f94fbc-49wsl" [a33f499e-d3e9-4aa6-a561-f8d7a17f8390] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0916 03:21:21.499601    1732 system_pods.go:89] "nvidia-device-plugin-daemonset-xr4jg" [4f23bf82-a6dd-44ab-af49-c86decb6acad] Running
	I0916 03:21:21.499603    1732 system_pods.go:89] "registry-66c9cd494c-fwzqm" [356ec898-bcc6-438e-88a6-3e2540fbe09a] Running
	I0916 03:21:21.499608    1732 system_pods.go:89] "registry-proxy-hjk6w" [997df24f-5154-460b-ab90-cfa8f452443b] Running
	I0916 03:21:21.499610    1732 system_pods.go:89] "snapshot-controller-56fcc65765-nf4b2" [b6019bda-9dd6-4b45-aafd-ed0929a688f5] Running
	I0916 03:21:21.499613    1732 system_pods.go:89] "snapshot-controller-56fcc65765-sbkqf" [091bf214-22c8-4f24-a753-3fc346119cad] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 03:21:21.499616    1732 system_pods.go:89] "storage-provisioner" [9ac22c2b-396a-44a1-9d67-eb43c399da5c] Running
	I0916 03:21:21.499620    1732 system_pods.go:126] duration metric: took 203.25875ms to wait for k8s-apps to be running ...
	I0916 03:21:21.499624    1732 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 03:21:21.499680    1732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 03:21:21.505293    1732 system_svc.go:56] duration metric: took 5.667292ms WaitForService to wait for kubelet
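
The system_svc.go check shells out (over SSH on the real node) and asks systemd whether kubelet is running; `is-active --quiet` prints nothing and communicates the answer purely through its exit status. An equivalent local sketch, assuming a systemd host:

    package svc

    import "os/exec"

    // kubeletActive returns true when `systemctl is-active --quiet kubelet`
    // exits 0, i.e. systemd reports the unit active; --quiet suppresses
    // stdout so only the exit code matters.
    func kubeletActive() bool {
        return exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run() == nil
    }
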
	I0916 03:21:21.505303    1732 kubeadm.go:582] duration metric: took 34.918481292s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 03:21:21.505313    1732 node_conditions.go:102] verifying NodePressure condition ...
	I0916 03:21:21.696687    1732 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0916 03:21:21.696700    1732 node_conditions.go:123] node cpu capacity is 2
	I0916 03:21:21.696706    1732 node_conditions.go:105] duration metric: took 191.396542ms to run NodePressure ...
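
The node_conditions.go lines read the node's advertised capacity (ephemeral storage in Ki, CPU count) and then verify that no pressure condition is set. A hedged client-go sketch of that verification (the helper name is illustrative):

    package nodes

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // verifyNodePressure prints the capacity fields seen in the log and
    // fails if MemoryPressure, DiskPressure, or PIDPressure is True on
    // any node.
    func verifyNodePressure(ctx context.Context, c kubernetes.Interface) error {
        nodes, err := c.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, n := range nodes.Items {
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            fmt.Printf("node %s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
            for _, cond := range n.Status.Conditions {
                switch cond.Type {
                case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
                    if cond.Status == corev1.ConditionTrue {
                        return fmt.Errorf("node %s reports %s", n.Name, cond.Type)
                    }
                }
            }
        }
        return nil
    }
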
	I0916 03:21:21.696716    1732 start.go:241] waiting for startup goroutines ...
	I0916 03:21:21.710398    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:21:21.810777    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:22.210702    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:21:22.259379    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:22.710708    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:21:22.759085    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:23.211098    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:21:23.258910    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:23.710584    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:21:23.759109    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:24.210701    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:21:24.259184    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:24.713055    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:21:24.758336    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:25.210586    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:21:25.259704    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:25.710744    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:21:25.757379    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:26.210616    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:21:26.259181    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:26.710297    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:21:26.758873    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:27.210596    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:21:27.258832    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:27.710512    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:21:27.759350    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:28.210760    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:21:28.258595    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:28.713448    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:21:28.760114    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:29.217203    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:21:29.260112    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:29.710248    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:21:29.758888    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:30.210968    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:21:30.257943    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:30.713895    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:21:30.760656    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:31.210698    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:21:31.258653    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:31.710351    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:21:31.758878    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:32.210749    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:21:32.258815    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:32.710532    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:21:32.758919    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:33.210580    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:21:33.258394    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:33.710226    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:21:33.758634    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:34.210530    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:21:34.311959    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:34.716382    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:21:34.764088    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:35.212150    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:21:35.258174    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:35.711034    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:21:35.758682    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:36.210470    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:21:36.258638    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:36.711097    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:21:36.811895    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:37.210350    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:21:37.258538    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:37.710120    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:21:37.758855    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:38.210198    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:21:38.258643    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:38.710516    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:21:38.758863    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:39.210974    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:21:39.258390    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:39.709973    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:21:39.758325    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:40.210238    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:21:40.258581    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:40.709012    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:21:40.758657    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:41.213960    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:21:41.258812    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:41.708715    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:21:41.758616    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:42.209840    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:21:42.258523    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:42.708512    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:21:42.758959    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:43.210200    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:21:43.258619    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:43.710117    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:21:43.758216    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:44.209881    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:21:44.258285    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:44.710147    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:21:44.758235    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:45.209789    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:21:45.258527    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:45.712270    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 03:21:45.760535    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:46.215349    1732 kapi.go:107] duration metric: took 54.509898875s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0916 03:21:46.258068    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:46.763520    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:47.261527    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:47.768461    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:48.262309    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:48.763035    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:49.260327    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:49.765246    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:50.258656    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:50.761028    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:51.258655    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:51.762437    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:52.260104    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:52.761072    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:53.262649    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:53.765437    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:54.264537    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:54.764611    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:55.261342    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:55.758272    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:56.259831    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:56.758199    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:57.257924    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:57.757916    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:58.258259    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:58.758379    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:59.258116    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:21:59.758281    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:22:00.256365    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:22:00.757951    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:22:01.258257    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:22:01.757980    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:22:02.256102    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:22:02.759720    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:22:03.259486    1732 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 03:22:03.758912    1732 kapi.go:107] duration metric: took 1m12.505061s to wait for app.kubernetes.io/name=ingress-nginx ...
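
The kapi.go:96/107 pairs that dominate this log implement the other wait shape: find pods by label selector (kapi.go:86 logs how many were found), then poll until every match leaves Pending. A sketch of that pattern, with an illustrative helper name:

    package kapi

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitPodsBySelector polls pods matching a label selector, e.g.
    // "app.kubernetes.io/name=ingress-nginx", until at least one exists
    // and all matches have reached the Running phase.
    func waitPodsBySelector(ctx context.Context, c kubernetes.Interface, ns, selector string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
            func(ctx context.Context) (bool, error) {
                pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
                if err != nil || len(pods.Items) == 0 {
                    return false, nil
                }
                for _, p := range pods.Items {
                    if p.Status.Phase != corev1.PodRunning {
                        return false, nil
                    }
                }
                return true, nil
            })
    }
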
	I0916 03:22:19.930024    1732 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0916 03:22:19.930043    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:22:20.431640    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:22:20.930559    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:22:21.428786    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:22:21.931750    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:22:22.432929    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:22:22.930813    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:22:23.432897    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:22:23.931871    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:22:24.433199    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:22:24.931641    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:22:25.430972    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:22:25.930527    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:22:26.435020    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:22:26.930272    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:22:27.428954    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:22:27.928801    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:22:28.429782    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:22:28.929179    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:22:29.435291    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:22:29.933082    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:22:30.430479    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:22:30.929658    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:22:31.429045    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:22:31.929819    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:22:32.430228    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:22:32.930574    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:22:33.432720    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:22:33.934566    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:22:34.431591    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:22:34.937193    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:22:35.434294    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:22:35.931371    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:22:36.431554    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:22:36.933385    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:22:37.429829    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:22:37.929575    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:22:38.428903    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:22:38.933233    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:22:39.433134    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:22:39.932798    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:22:40.433413    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:22:40.934125    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:22:41.428482    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:22:41.929321    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:22:42.433716    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:22:42.934492    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:22:43.432864    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:22:43.934365    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:22:44.431744    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:22:44.935769    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:22:45.434526    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:22:45.935697    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:22:46.429288    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:22:46.933888    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:22:47.433342    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:22:47.933627    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:22:48.432762    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:22:48.935364    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:22:49.429572    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:22:49.934870    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:22:50.437746    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:22:50.930164    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:22:51.426972    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:22:51.935350    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:22:52.429351    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:22:52.933312    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:22:53.433921    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:22:53.934044    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:22:54.429688    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:22:54.932401    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:22:55.429070    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:22:55.932390    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:22:56.429566    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:22:56.932040    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:22:57.428522    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:22:57.934309    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:22:58.431088    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:22:58.932911    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:22:59.428336    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:22:59.928256    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:23:00.427645    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:23:00.928226    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:23:01.428694    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:23:01.926158    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:23:02.428102    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:23:02.927762    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:23:03.428902    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:23:03.934227    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:23:04.429649    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:23:04.933302    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:23:05.434230    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:23:05.931872    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:23:06.433247    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:23:06.937843    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:23:07.434649    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:23:07.929822    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:23:08.432309    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:23:08.934533    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:23:09.428460    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:23:09.932223    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:23:10.429816    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:23:10.928309    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:23:11.427908    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:23:11.929348    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:23:12.432788    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:23:12.929529    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:23:13.433837    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:23:13.934547    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:23:14.435150    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:23:14.929236    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:23:15.428473    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:23:15.933504    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:23:16.433198    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:23:16.933911    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:23:17.432911    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:23:17.934385    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:23:18.433481    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:23:18.932031    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:23:19.433370    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:23:19.929716    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:23:20.434504    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:23:20.928090    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:23:21.427843    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:23:21.929669    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:23:22.427517    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:23:22.927846    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:23:23.429553    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:23:23.927200    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:23:24.426973    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:23:24.927170    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:23:25.427678    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:23:25.928044    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:23:26.427140    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:23:26.926805    1732 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 03:23:27.427918    1732 kapi.go:107] duration metric: took 2m30.004792875s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0916 03:23:27.433433    1732 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-490000 cluster.
	I0916 03:23:27.436328    1732 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0916 03:23:27.439164    1732 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0916 03:23:27.443353    1732 out.go:177] * Enabled addons: ingress-dns, storage-provisioner, nvidia-device-plugin, yakd, volcano, cloud-spanner, inspektor-gadget, metrics-server, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0916 03:23:27.448127    1732 addons.go:510] duration metric: took 2m40.865299958s for enable addons: enabled=[ingress-dns storage-provisioner nvidia-device-plugin yakd volcano cloud-spanner inspektor-gadget metrics-server default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0916 03:23:27.448165    1732 start.go:246] waiting for cluster config update ...
	I0916 03:23:27.448193    1732 start.go:255] writing updated cluster config ...
	I0916 03:23:27.448893    1732 ssh_runner.go:195] Run: rm -f paused
	I0916 03:23:27.616001    1732 start.go:600] kubectl: 1.29.2, cluster: 1.31.1 (minor skew: 2)
	I0916 03:23:27.620274    1732 out.go:201] 
	W0916 03:23:27.624296    1732 out.go:270] ! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.31.1.
	I0916 03:23:27.628176    1732 out.go:177]   - Want kubectl v1.31.1? Try 'minikube kubectl -- get pods -A'
	I0916 03:23:27.634237    1732 out.go:177] * Done! kubectl is now configured to use "addons-490000" cluster and "default" namespace by default
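
The closing warning comes from a minor-version skew check: client 1.29 versus cluster 1.31 differs by two minor releases, past the supported skew of one. A small sketch of that arithmetic (using github.com/blang/semver/v4 here is an assumption, not a claim about minikube's dependencies):

    package main

    import (
        "fmt"

        "github.com/blang/semver/v4"
    )

    // minorSkew returns the absolute difference between the minor
    // components of two version strings, e.g. "1.29.2" vs "1.31.1" -> 2.
    func minorSkew(client, cluster string) (uint64, error) {
        cv, err := semver.ParseTolerant(client)
        if err != nil {
            return 0, err
        }
        kv, err := semver.ParseTolerant(cluster)
        if err != nil {
            return 0, err
        }
        if cv.Minor > kv.Minor {
            return cv.Minor - kv.Minor, nil
        }
        return kv.Minor - cv.Minor, nil
    }

    func main() {
        // Reproduces the condition behind the warning above.
        if skew, err := minorSkew("1.29.2", "1.31.1"); err == nil && skew > 1 {
            fmt.Printf("kubectl may have incompatibilities with the cluster (minor skew: %d)\n", skew)
        }
    }
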
	
	
	==> Docker <==
	Sep 16 10:33:15 addons-490000 dockerd[1284]: time="2024-09-16T10:33:15.818609116Z" level=info msg="shim disconnected" id=f131ea7f663ad2c29f97887e330ac85afffd413e33006cc3b59205679cb99cb1 namespace=moby
	Sep 16 10:33:15 addons-490000 dockerd[1284]: time="2024-09-16T10:33:15.818638461Z" level=warning msg="cleaning up after shim disconnected" id=f131ea7f663ad2c29f97887e330ac85afffd413e33006cc3b59205679cb99cb1 namespace=moby
	Sep 16 10:33:15 addons-490000 dockerd[1284]: time="2024-09-16T10:33:15.818642463Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 16 10:33:15 addons-490000 dockerd[1284]: time="2024-09-16T10:33:15.826459958Z" level=warning msg="cleanup warnings time=\"2024-09-16T10:33:15Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Sep 16 10:33:17 addons-490000 dockerd[1276]: time="2024-09-16T10:33:17.514985972Z" level=info msg="ignoring event" container=bd528970b9070d13141fa31e098a713920669b7f6c5bb84e8f1bb2371615eaba module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:33:17 addons-490000 dockerd[1284]: time="2024-09-16T10:33:17.515067545Z" level=info msg="shim disconnected" id=bd528970b9070d13141fa31e098a713920669b7f6c5bb84e8f1bb2371615eaba namespace=moby
	Sep 16 10:33:17 addons-490000 dockerd[1284]: time="2024-09-16T10:33:17.515115021Z" level=warning msg="cleaning up after shim disconnected" id=bd528970b9070d13141fa31e098a713920669b7f6c5bb84e8f1bb2371615eaba namespace=moby
	Sep 16 10:33:17 addons-490000 dockerd[1284]: time="2024-09-16T10:33:17.515120481Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 16 10:33:17 addons-490000 dockerd[1284]: time="2024-09-16T10:33:17.661764745Z" level=info msg="shim disconnected" id=a2b5c634cec6b714318a8bf640765780e9c6e79f150f0e68983931e100c7596a namespace=moby
	Sep 16 10:33:17 addons-490000 dockerd[1284]: time="2024-09-16T10:33:17.661834939Z" level=warning msg="cleaning up after shim disconnected" id=a2b5c634cec6b714318a8bf640765780e9c6e79f150f0e68983931e100c7596a namespace=moby
	Sep 16 10:33:17 addons-490000 dockerd[1284]: time="2024-09-16T10:33:17.661852028Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 16 10:33:17 addons-490000 dockerd[1276]: time="2024-09-16T10:33:17.663298035Z" level=info msg="ignoring event" container=a2b5c634cec6b714318a8bf640765780e9c6e79f150f0e68983931e100c7596a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:33:17 addons-490000 dockerd[1284]: time="2024-09-16T10:33:17.673329335Z" level=warning msg="cleanup warnings time=\"2024-09-16T10:33:17Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Sep 16 10:33:17 addons-490000 dockerd[1276]: time="2024-09-16T10:33:17.703854899Z" level=info msg="ignoring event" container=e39f91b2a1535dc6805f6a8ec44e1eef550ec0d017136738b8e95f6c2d1b2a1b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:33:17 addons-490000 dockerd[1284]: time="2024-09-16T10:33:17.704098909Z" level=info msg="shim disconnected" id=e39f91b2a1535dc6805f6a8ec44e1eef550ec0d017136738b8e95f6c2d1b2a1b namespace=moby
	Sep 16 10:33:17 addons-490000 dockerd[1284]: time="2024-09-16T10:33:17.704293399Z" level=warning msg="cleaning up after shim disconnected" id=e39f91b2a1535dc6805f6a8ec44e1eef550ec0d017136738b8e95f6c2d1b2a1b namespace=moby
	Sep 16 10:33:17 addons-490000 dockerd[1284]: time="2024-09-16T10:33:17.704298484Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 16 10:33:17 addons-490000 dockerd[1276]: time="2024-09-16T10:33:17.767064805Z" level=info msg="ignoring event" container=a0edec73c2a83bf6bb3620fcdb9ea976e76ae4ef114feb081cbccc5b5905aadf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:33:17 addons-490000 dockerd[1284]: time="2024-09-16T10:33:17.767615597Z" level=info msg="shim disconnected" id=a0edec73c2a83bf6bb3620fcdb9ea976e76ae4ef114feb081cbccc5b5905aadf namespace=moby
	Sep 16 10:33:17 addons-490000 dockerd[1284]: time="2024-09-16T10:33:17.767690125Z" level=warning msg="cleaning up after shim disconnected" id=a0edec73c2a83bf6bb3620fcdb9ea976e76ae4ef114feb081cbccc5b5905aadf namespace=moby
	Sep 16 10:33:17 addons-490000 dockerd[1284]: time="2024-09-16T10:33:17.767707256Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 16 10:33:17 addons-490000 dockerd[1284]: time="2024-09-16T10:33:17.821480795Z" level=info msg="shim disconnected" id=3d5832bf54f7813af6503a0be091e4fa74c0fc6407bc9603af605dee7718910f namespace=moby
	Sep 16 10:33:17 addons-490000 dockerd[1276]: time="2024-09-16T10:33:17.821562618Z" level=info msg="ignoring event" container=3d5832bf54f7813af6503a0be091e4fa74c0fc6407bc9603af605dee7718910f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:33:17 addons-490000 dockerd[1284]: time="2024-09-16T10:33:17.821812129Z" level=warning msg="cleaning up after shim disconnected" id=3d5832bf54f7813af6503a0be091e4fa74c0fc6407bc9603af605dee7718910f namespace=moby
	Sep 16 10:33:17 addons-490000 dockerd[1284]: time="2024-09-16T10:33:17.821821507Z" level=info msg="cleaning up dead shim" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                       ATTEMPT             POD ID              POD
	e1974fd662273       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                                  5 seconds ago       Running             hello-world-app            0                   7d88e5cec27ce       hello-world-app-55bf9c44b4-fdgpf
	51970151144c8       nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf                                                13 seconds ago      Running             nginx                      0                   bf0353be175b1       nginx
	55bcb34233269       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                 9 minutes ago       Running             gcp-auth                   0                   2b0297d6b3787       gcp-auth-89d5ffd79-nddfl
	bf82d54a88791       420193b27261a                                                                                                                11 minutes ago      Exited              patch                      1                   97343ddf50e4c       ingress-nginx-admission-patch-m95dm
	b81efcc4af4c1       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              create                     0                   a6ea812afd90c       ingress-nginx-admission-create-gqzb8
	4cc11308e8d1b       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                       12 minutes ago      Running             local-path-provisioner     0                   8ccd5fcf41d44       local-path-provisioner-86d989889c-2csc5
	e39f91b2a1535       gcr.io/k8s-minikube/kube-registry-proxy@sha256:b3fa0b2df8737fdb85ad5918a7e2652527463e357afff83a5e5bb966bcedc367              12 minutes ago      Exited              registry-proxy             0                   3d5832bf54f78       registry-proxy-hjk6w
	f368a10e69dbf       gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc               12 minutes ago      Running             cloud-spanner-emulator     0                   f2c4b7d5f776c       cloud-spanner-emulator-769b77f747-ltstq
	a2b5c634cec6b       registry@sha256:ac0192b549007e22998eb74e8d8488dcfe70f1489520c3b144a6047ac5efbe90                                             12 minutes ago      Exited              registry                   0                   a0edec73c2a83       registry-66c9cd494c-fwzqm
	1ad20502fb06d       marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624                                        12 minutes ago      Running             yakd                       0                   0de88e1e95b52       yakd-dashboard-67d98fc6b-2qlmm
	b517837ab24d5       nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47                     12 minutes ago      Running             nvidia-device-plugin-ctr   0                   3d707f61efb37       nvidia-device-plugin-daemonset-xr4jg
	95f0d666e58ef       ba04bb24b9575                                                                                                                12 minutes ago      Running             storage-provisioner        0                   23a40dbbbc578       storage-provisioner
	8d447c2f7a93d       2f6c962e7b831                                                                                                                12 minutes ago      Running             coredns                    0                   7916c390cc086       coredns-7c65d6cfc9-pglr2
	fb9859218340b       24a140c548c07                                                                                                                12 minutes ago      Running             kube-proxy                 0                   b1321143210d9       kube-proxy-cbxhg
	47f58497c24b9       7f8aa378bb47d                                                                                                                12 minutes ago      Running             kube-scheduler             0                   ea5a30ba790c7       kube-scheduler-addons-490000
	524887ab838f4       d3f53a98c0a9d                                                                                                                12 minutes ago      Running             kube-apiserver             0                   8f803880bdc18       kube-apiserver-addons-490000
	6b1f131e76a8d       27e3830e14027                                                                                                                12 minutes ago      Running             etcd                       0                   466b2f783bea4       etcd-addons-490000
	9b7f7b965bf42       279f381cb3736                                                                                                                12 minutes ago      Running             kube-controller-manager    0                   c2c2139fece0e       kube-controller-manager-addons-490000
	
	
	==> coredns [8d447c2f7a93] <==
	[INFO] 10.244.0.21:45097 - 20122 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000032267s
	[INFO] 10.244.0.21:45097 - 8541 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000040773s
	[INFO] 10.244.0.21:34486 - 32136 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000021386s
	[INFO] 10.244.0.21:45097 - 51954 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000045942s
	[INFO] 10.244.0.21:45097 - 63953 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000047109s
	[INFO] 10.244.0.21:34486 - 64385 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000012173s
	[INFO] 10.244.0.21:34486 - 985 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000011798s
	[INFO] 10.244.0.21:45097 - 30535 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00003998s
	[INFO] 10.244.0.21:34486 - 41996 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000012799s
	[INFO] 10.244.0.21:34486 - 42059 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000013007s
	[INFO] 10.244.0.21:34486 - 21978 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00003456s
	[INFO] 10.244.0.21:38350 - 27402 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000047234s
	[INFO] 10.244.0.21:47492 - 30777 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000013716s
	[INFO] 10.244.0.21:47492 - 45145 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00002443s
	[INFO] 10.244.0.21:38350 - 3412 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000017343s
	[INFO] 10.244.0.21:47492 - 19266 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000017343s
	[INFO] 10.244.0.21:38350 - 28290 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000011506s
	[INFO] 10.244.0.21:47492 - 54938 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000012799s
	[INFO] 10.244.0.21:38350 - 21070 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000009922s
	[INFO] 10.244.0.21:47492 - 46796 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000010881s
	[INFO] 10.244.0.21:38350 - 57144 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000010464s
	[INFO] 10.244.0.21:47492 - 42607 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000010923s
	[INFO] 10.244.0.21:38350 - 21368 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000010214s
	[INFO] 10.244.0.21:47492 - 10711 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000016467s
	[INFO] 10.244.0.21:38350 - 7301 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000010047s
	
	
	==> describe nodes <==
	Name:               addons-490000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-490000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=addons-490000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T03_20_41_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-490000
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:20:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-490000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:33:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:29:22 +0000   Mon, 16 Sep 2024 10:20:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:29:22 +0000   Mon, 16 Sep 2024 10:20:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:29:22 +0000   Mon, 16 Sep 2024 10:20:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:29:22 +0000   Mon, 16 Sep 2024 10:20:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.2
	  Hostname:    addons-490000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	System Info:
	  Machine ID:                 058c508f7dc44f61a72407849d1f8614
	  System UUID:                058c508f7dc44f61a72407849d1f8614
	  Boot ID:                    c2263a49-57fb-4b41-ae06-58d0c98c7545
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m12s
	  default                     cloud-spanner-emulator-769b77f747-ltstq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  default                     hello-world-app-55bf9c44b4-fdgpf           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         16s
	  gcp-auth                    gcp-auth-89d5ffd79-nddfl                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-7c65d6cfc9-pglr2                   100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     12m
	  kube-system                 etcd-addons-490000                         100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-490000               250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-490000      200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-cbxhg                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-490000               100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 nvidia-device-plugin-daemonset-xr4jg       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  local-path-storage          local-path-provisioner-86d989889c-2csc5    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  yakd-dashboard              yakd-dashboard-67d98fc6b-2qlmm             0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             298Mi (7%)  426Mi (11%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node addons-490000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node addons-490000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node addons-490000 status is now: NodeHasSufficientPID
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node addons-490000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node addons-490000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet          Node addons-490000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                12m                kubelet          Node addons-490000 status is now: NodeReady
	  Normal  RegisteredNode           12m                node-controller  Node addons-490000 event: Registered Node addons-490000 in Controller
	
	
	==> dmesg <==
	[  +7.856840] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.664343] kauditd_printk_skb: 36 callbacks suppressed
	[  +5.099078] kauditd_printk_skb: 19 callbacks suppressed
	[  +5.765770] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.068605] kauditd_printk_skb: 27 callbacks suppressed
	[ +13.546670] kauditd_printk_skb: 5 callbacks suppressed
	[Sep16 10:22] kauditd_printk_skb: 34 callbacks suppressed
	[ +12.673923] kauditd_printk_skb: 6 callbacks suppressed
	[Sep16 10:23] kauditd_printk_skb: 2 callbacks suppressed
	[  +9.406077] kauditd_printk_skb: 40 callbacks suppressed
	[ +13.479513] kauditd_printk_skb: 2 callbacks suppressed
	[ +21.199554] kauditd_printk_skb: 9 callbacks suppressed
	[ +11.182315] kauditd_printk_skb: 7 callbacks suppressed
	[Sep16 10:24] kauditd_printk_skb: 20 callbacks suppressed
	[ +20.167565] kauditd_printk_skb: 2 callbacks suppressed
	[Sep16 10:27] kauditd_printk_skb: 2 callbacks suppressed
	[Sep16 10:32] kauditd_printk_skb: 2 callbacks suppressed
	[  +7.885873] kauditd_printk_skb: 19 callbacks suppressed
	[  +6.586574] kauditd_printk_skb: 10 callbacks suppressed
	[  +6.753888] kauditd_printk_skb: 7 callbacks suppressed
	[  +7.504266] kauditd_printk_skb: 33 callbacks suppressed
	[  +5.438606] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.253063] kauditd_printk_skb: 2 callbacks suppressed
	[Sep16 10:33] kauditd_printk_skb: 13 callbacks suppressed
	[  +5.497770] kauditd_printk_skb: 29 callbacks suppressed
	
	
	==> etcd [6b1f131e76a8] <==
	{"level":"info","ts":"2024-09-16T10:20:37.878703Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.105.2:2380"}
	{"level":"info","ts":"2024-09-16T10:20:37.878722Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.105.2:2380"}
	{"level":"info","ts":"2024-09-16T10:20:38.657012Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-16T10:20:38.657062Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-16T10:20:38.657096Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 received MsgPreVoteResp from c46d288d2fcb0590 at term 1"}
	{"level":"info","ts":"2024-09-16T10:20:38.657106Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became candidate at term 2"}
	{"level":"info","ts":"2024-09-16T10:20:38.657114Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 received MsgVoteResp from c46d288d2fcb0590 at term 2"}
	{"level":"info","ts":"2024-09-16T10:20:38.657124Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became leader at term 2"}
	{"level":"info","ts":"2024-09-16T10:20:38.657141Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c46d288d2fcb0590 elected leader c46d288d2fcb0590 at term 2"}
	{"level":"info","ts":"2024-09-16T10:20:38.658024Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:20:38.658296Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"c46d288d2fcb0590","local-member-attributes":"{Name:addons-490000 ClientURLs:[https://192.168.105.2:2379]}","request-path":"/0/members/c46d288d2fcb0590/attributes","cluster-id":"6e03e7863b4f9c54","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T10:20:38.658355Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6e03e7863b4f9c54","local-member-id":"c46d288d2fcb0590","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:20:38.658427Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:20:38.658469Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:20:38.658474Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:20:38.658504Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T10:20:38.658512Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T10:20:38.658483Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:20:38.659071Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:20:38.659071Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:20:38.659693Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.2:2379"}
	{"level":"info","ts":"2024-09-16T10:20:38.660116Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T10:30:38.687914Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1866}
	{"level":"info","ts":"2024-09-16T10:30:38.780134Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1866,"took":"90.617054ms","hash":4032519234,"current-db-size-bytes":8835072,"current-db-size":"8.8 MB","current-db-size-in-use-bytes":4792320,"current-db-size-in-use":"4.8 MB"}
	{"level":"info","ts":"2024-09-16T10:30:38.780167Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4032519234,"revision":1866,"compact-revision":-1}
	
	
	==> gcp-auth [55bcb3423326] <==
	2024/09/16 10:23:26 GCP Auth Webhook started!
	2024/09/16 10:23:42 Ready to marshal response ...
	2024/09/16 10:23:42 Ready to write response ...
	2024/09/16 10:23:43 Ready to marshal response ...
	2024/09/16 10:23:43 Ready to write response ...
	2024/09/16 10:24:05 Ready to marshal response ...
	2024/09/16 10:24:05 Ready to write response ...
	2024/09/16 10:24:06 Ready to marshal response ...
	2024/09/16 10:24:06 Ready to write response ...
	2024/09/16 10:24:06 Ready to marshal response ...
	2024/09/16 10:24:06 Ready to write response ...
	2024/09/16 10:32:17 Ready to marshal response ...
	2024/09/16 10:32:17 Ready to write response ...
	2024/09/16 10:32:17 Ready to marshal response ...
	2024/09/16 10:32:17 Ready to write response ...
	2024/09/16 10:32:31 Ready to marshal response ...
	2024/09/16 10:32:31 Ready to write response ...
	2024/09/16 10:33:02 Ready to marshal response ...
	2024/09/16 10:33:02 Ready to write response ...
	2024/09/16 10:33:11 Ready to marshal response ...
	2024/09/16 10:33:11 Ready to write response ...
	
	
	==> kernel <==
	 10:33:18 up 13 min,  0 users,  load average: 0.43, 0.53, 0.40
	Linux addons-490000 5.10.207 #1 SMP PREEMPT Sun Sep 15 17:39:25 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [524887ab838f] <==
	I0916 10:23:56.779379       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0916 10:23:57.456346       1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
	W0916 10:23:57.529177       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0916 10:23:57.586221       1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0916 10:23:57.616353       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0916 10:23:57.631152       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0916 10:23:57.787065       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0916 10:23:57.827550       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	I0916 10:32:24.283952       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0916 10:32:46.329931       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0916 10:32:46.329950       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0916 10:32:46.346326       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0916 10:32:46.346341       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0916 10:32:46.356946       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0916 10:32:46.356972       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0916 10:32:46.370235       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0916 10:32:46.370251       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0916 10:32:47.345133       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0916 10:32:47.370350       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0916 10:32:47.461095       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I0916 10:32:57.133027       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0916 10:32:58.145970       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0916 10:33:02.456222       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0916 10:33:02.556674       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.107.9.167"}
	I0916 10:33:11.917823       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.105.192.145"}
	
	
	==> kube-controller-manager [9b7f7b965bf4] <==
	I0916 10:33:07.206182       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gadget"
	W0916 10:33:09.503442       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:33:09.503689       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0916 10:33:10.174271       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:33:10.174399       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0916 10:33:11.749166       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="14.582383ms"
	I0916 10:33:11.751796       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="2.5499ms"
	I0916 10:33:11.751901       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="42.774µs"
	I0916 10:33:11.753777       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="13.799µs"
	W0916 10:33:11.971952       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:33:11.971978       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0916 10:33:12.646473       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
	I0916 10:33:12.647984       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="2.126µs"
	I0916 10:33:12.649389       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	W0916 10:33:13.859359       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:33:13.859388       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0916 10:33:14.528830       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="5.15624ms"
	I0916 10:33:14.528956       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="32.346µs"
	W0916 10:33:16.047584       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:33:16.047619       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0916 10:33:16.402589       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0916 10:33:16.402683       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:33:16.775009       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0916 10:33:16.775109       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:33:17.631769       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="2.459µs"
	
	
	==> kube-proxy [fb9859218340] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0916 10:20:47.454145       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0916 10:20:47.462110       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.2"]
	E0916 10:20:47.462155       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:20:47.482702       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0916 10:20:47.482743       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0916 10:20:47.482764       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:20:47.483911       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:20:47.484046       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:20:47.484052       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:20:47.484998       1 config.go:199] "Starting service config controller"
	I0916 10:20:47.485005       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:20:47.485014       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:20:47.485016       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:20:47.485211       1 config.go:328] "Starting node config controller"
	I0916 10:20:47.485214       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:20:47.585194       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 10:20:47.585229       1 shared_informer.go:320] Caches are synced for node config
	I0916 10:20:47.585237       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [47f58497c24b] <==
	W0916 10:20:39.180110       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0916 10:20:39.180591       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:20:39.180121       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0916 10:20:39.180874       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:20:39.180146       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 10:20:39.180949       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:20:39.180159       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0916 10:20:39.181083       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:20:39.180170       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0916 10:20:39.181126       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 10:20:39.180181       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 10:20:39.181213       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:20:39.180194       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0916 10:20:39.181277       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:20:39.180204       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 10:20:39.181338       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 10:20:39.180215       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 10:20:39.181354       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:20:40.092135       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0916 10:20:40.092233       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:20:40.191517       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0916 10:20:40.191672       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:20:40.224509       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 10:20:40.224533       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0916 10:20:40.479328       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 16 10:33:13 addons-490000 kubelet[2039]: I0916 10:33:13.038560    2039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9510eacb-66a5-41d6-a92a-30d43414c2a1" path="/var/lib/kubelet/pods/9510eacb-66a5-41d6-a92a-30d43414c2a1/volumes"
	Sep 16 10:33:15 addons-490000 kubelet[2039]: I0916 10:33:15.947180    2039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ttw89\" (UniqueName: \"kubernetes.io/projected/01e3c9ec-2c02-417d-a5c3-5954828a3791-kube-api-access-ttw89\") pod \"01e3c9ec-2c02-417d-a5c3-5954828a3791\" (UID: \"01e3c9ec-2c02-417d-a5c3-5954828a3791\") "
	Sep 16 10:33:15 addons-490000 kubelet[2039]: I0916 10:33:15.947207    2039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/01e3c9ec-2c02-417d-a5c3-5954828a3791-webhook-cert\") pod \"01e3c9ec-2c02-417d-a5c3-5954828a3791\" (UID: \"01e3c9ec-2c02-417d-a5c3-5954828a3791\") "
	Sep 16 10:33:15 addons-490000 kubelet[2039]: I0916 10:33:15.954519    2039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01e3c9ec-2c02-417d-a5c3-5954828a3791-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "01e3c9ec-2c02-417d-a5c3-5954828a3791" (UID: "01e3c9ec-2c02-417d-a5c3-5954828a3791"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 16 10:33:15 addons-490000 kubelet[2039]: I0916 10:33:15.954587    2039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01e3c9ec-2c02-417d-a5c3-5954828a3791-kube-api-access-ttw89" (OuterVolumeSpecName: "kube-api-access-ttw89") pod "01e3c9ec-2c02-417d-a5c3-5954828a3791" (UID: "01e3c9ec-2c02-417d-a5c3-5954828a3791"). InnerVolumeSpecName "kube-api-access-ttw89". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 16 10:33:16 addons-490000 kubelet[2039]: I0916 10:33:16.047808    2039 reconciler_common.go:288] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/01e3c9ec-2c02-417d-a5c3-5954828a3791-webhook-cert\") on node \"addons-490000\" DevicePath \"\""
	Sep 16 10:33:16 addons-490000 kubelet[2039]: I0916 10:33:16.047828    2039 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-ttw89\" (UniqueName: \"kubernetes.io/projected/01e3c9ec-2c02-417d-a5c3-5954828a3791-kube-api-access-ttw89\") on node \"addons-490000\" DevicePath \"\""
	Sep 16 10:33:16 addons-490000 kubelet[2039]: I0916 10:33:16.557454    2039 scope.go:117] "RemoveContainer" containerID="7fa2b8e4c154d31bccc13252a58bd31b7014add40c0cf24870220c08620c2f3a"
	Sep 16 10:33:16 addons-490000 kubelet[2039]: I0916 10:33:16.572434    2039 scope.go:117] "RemoveContainer" containerID="7fa2b8e4c154d31bccc13252a58bd31b7014add40c0cf24870220c08620c2f3a"
	Sep 16 10:33:16 addons-490000 kubelet[2039]: E0916 10:33:16.573014    2039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 7fa2b8e4c154d31bccc13252a58bd31b7014add40c0cf24870220c08620c2f3a" containerID="7fa2b8e4c154d31bccc13252a58bd31b7014add40c0cf24870220c08620c2f3a"
	Sep 16 10:33:16 addons-490000 kubelet[2039]: I0916 10:33:16.573048    2039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"7fa2b8e4c154d31bccc13252a58bd31b7014add40c0cf24870220c08620c2f3a"} err="failed to get container status \"7fa2b8e4c154d31bccc13252a58bd31b7014add40c0cf24870220c08620c2f3a\": rpc error: code = Unknown desc = Error response from daemon: No such container: 7fa2b8e4c154d31bccc13252a58bd31b7014add40c0cf24870220c08620c2f3a"
	Sep 16 10:33:17 addons-490000 kubelet[2039]: I0916 10:33:17.059139    2039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01e3c9ec-2c02-417d-a5c3-5954828a3791" path="/var/lib/kubelet/pods/01e3c9ec-2c02-417d-a5c3-5954828a3791/volumes"
	Sep 16 10:33:17 addons-490000 kubelet[2039]: I0916 10:33:17.417124    2039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-55bf9c44b4-fdgpf" podStartSLOduration=4.665498862 podStartE2EDuration="6.417067251s" podCreationTimestamp="2024-09-16 10:33:11 +0000 UTC" firstStartedPulling="2024-09-16 10:33:12.158558236 +0000 UTC m=+751.174324685" lastFinishedPulling="2024-09-16 10:33:13.910126667 +0000 UTC m=+752.925893074" observedRunningTime="2024-09-16 10:33:14.524818544 +0000 UTC m=+753.540584993" watchObservedRunningTime="2024-09-16 10:33:17.417067251 +0000 UTC m=+756.432833741"
	Sep 16 10:33:17 addons-490000 kubelet[2039]: I0916 10:33:17.666837    2039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/e326f3c2-08cd-4723-9b85-763f87bb6250-gcp-creds\") pod \"e326f3c2-08cd-4723-9b85-763f87bb6250\" (UID: \"e326f3c2-08cd-4723-9b85-763f87bb6250\") "
	Sep 16 10:33:17 addons-490000 kubelet[2039]: I0916 10:33:17.666872    2039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzm4s\" (UniqueName: \"kubernetes.io/projected/e326f3c2-08cd-4723-9b85-763f87bb6250-kube-api-access-tzm4s\") pod \"e326f3c2-08cd-4723-9b85-763f87bb6250\" (UID: \"e326f3c2-08cd-4723-9b85-763f87bb6250\") "
	Sep 16 10:33:17 addons-490000 kubelet[2039]: I0916 10:33:17.667272    2039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e326f3c2-08cd-4723-9b85-763f87bb6250-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "e326f3c2-08cd-4723-9b85-763f87bb6250" (UID: "e326f3c2-08cd-4723-9b85-763f87bb6250"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 16 10:33:17 addons-490000 kubelet[2039]: I0916 10:33:17.670695    2039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e326f3c2-08cd-4723-9b85-763f87bb6250-kube-api-access-tzm4s" (OuterVolumeSpecName: "kube-api-access-tzm4s") pod "e326f3c2-08cd-4723-9b85-763f87bb6250" (UID: "e326f3c2-08cd-4723-9b85-763f87bb6250"). InnerVolumeSpecName "kube-api-access-tzm4s". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 16 10:33:17 addons-490000 kubelet[2039]: I0916 10:33:17.767174    2039 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/e326f3c2-08cd-4723-9b85-763f87bb6250-gcp-creds\") on node \"addons-490000\" DevicePath \"\""
	Sep 16 10:33:17 addons-490000 kubelet[2039]: I0916 10:33:17.767192    2039 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-tzm4s\" (UniqueName: \"kubernetes.io/projected/e326f3c2-08cd-4723-9b85-763f87bb6250-kube-api-access-tzm4s\") on node \"addons-490000\" DevicePath \"\""
	Sep 16 10:33:17 addons-490000 kubelet[2039]: I0916 10:33:17.968460    2039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25xf5\" (UniqueName: \"kubernetes.io/projected/997df24f-5154-460b-ab90-cfa8f452443b-kube-api-access-25xf5\") pod \"997df24f-5154-460b-ab90-cfa8f452443b\" (UID: \"997df24f-5154-460b-ab90-cfa8f452443b\") "
	Sep 16 10:33:17 addons-490000 kubelet[2039]: I0916 10:33:17.968486    2039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhgd4\" (UniqueName: \"kubernetes.io/projected/356ec898-bcc6-438e-88a6-3e2540fbe09a-kube-api-access-fhgd4\") pod \"356ec898-bcc6-438e-88a6-3e2540fbe09a\" (UID: \"356ec898-bcc6-438e-88a6-3e2540fbe09a\") "
	Sep 16 10:33:17 addons-490000 kubelet[2039]: I0916 10:33:17.969408    2039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/997df24f-5154-460b-ab90-cfa8f452443b-kube-api-access-25xf5" (OuterVolumeSpecName: "kube-api-access-25xf5") pod "997df24f-5154-460b-ab90-cfa8f452443b" (UID: "997df24f-5154-460b-ab90-cfa8f452443b"). InnerVolumeSpecName "kube-api-access-25xf5". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 16 10:33:17 addons-490000 kubelet[2039]: I0916 10:33:17.969920    2039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/356ec898-bcc6-438e-88a6-3e2540fbe09a-kube-api-access-fhgd4" (OuterVolumeSpecName: "kube-api-access-fhgd4") pod "356ec898-bcc6-438e-88a6-3e2540fbe09a" (UID: "356ec898-bcc6-438e-88a6-3e2540fbe09a"). InnerVolumeSpecName "kube-api-access-fhgd4". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 16 10:33:18 addons-490000 kubelet[2039]: I0916 10:33:18.069474    2039 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-25xf5\" (UniqueName: \"kubernetes.io/projected/997df24f-5154-460b-ab90-cfa8f452443b-kube-api-access-25xf5\") on node \"addons-490000\" DevicePath \"\""
	Sep 16 10:33:18 addons-490000 kubelet[2039]: I0916 10:33:18.069497    2039 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-fhgd4\" (UniqueName: \"kubernetes.io/projected/356ec898-bcc6-438e-88a6-3e2540fbe09a-kube-api-access-fhgd4\") on node \"addons-490000\" DevicePath \"\""
	
	
	==> storage-provisioner [95f0d666e58e] <==
	I0916 10:20:48.291008       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 10:20:48.296079       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 10:20:48.296098       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 10:20:48.304435       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 10:20:48.305722       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-490000_ef47e489-3144-42e3-b5b9-cfff0d974053!
	I0916 10:20:48.308788       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c851653a-1fdd-4f66-a0fe-9738da9f11b1", APIVersion:"v1", ResourceVersion:"446", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-490000_ef47e489-3144-42e3-b5b9-cfff0d974053 became leader
	I0916 10:20:48.406670       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-490000_ef47e489-3144-42e3-b5b9-cfff0d974053!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-490000 -n addons-490000
helpers_test.go:261: (dbg) Run:  kubectl --context addons-490000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox registry-66c9cd494c-fwzqm registry-proxy-hjk6w
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-490000 describe pod busybox registry-66c9cd494c-fwzqm registry-proxy-hjk6w
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-490000 describe pod busybox registry-66c9cd494c-fwzqm registry-proxy-hjk6w: exit status 1 (42.076125ms)

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-490000/192.168.105.2
	Start Time:       Mon, 16 Sep 2024 03:24:06 -0700
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.27
	IPs:
	  IP:  10.244.0.27
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8ws6b (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-8ws6b:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m12s                   default-scheduler  Successfully assigned default/busybox to addons-490000
	  Normal   Pulling    7m36s (x4 over 9m12s)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m36s (x4 over 9m12s)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m36s (x4 over 9m12s)   kubelet            Error: ErrImagePull
	  Warning  Failed     7m25s (x6 over 9m12s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    3m59s (x21 over 9m12s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "registry-66c9cd494c-fwzqm" not found
	Error from server (NotFound): pods "registry-proxy-hjk6w" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-490000 describe pod busybox registry-66c9cd494c-fwzqm registry-proxy-hjk6w: exit status 1
--- FAIL: TestAddons/parallel/Registry (71.32s)
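The registry pods were already gone when the post-mortem ran (hence the NotFound errors above), and the leftover busybox pod is stuck in ImagePullBackOff because every pull of gcr.io/k8s-minikube/busybox:1.28.4-glibc was rejected with "unauthorized: authentication failed". A minimal repro sketch from the host, assuming docker and the addons-490000 kubeconfig context are still available (not part of the test itself):

	# Retry the exact pull the kubelet attempted; an auth error here implicates
	# the registry or network path rather than the cluster:
	docker pull gcr.io/k8s-minikube/busybox:1.28.4-glibc
	# Show only the pull-related events for the stuck pod:
	kubectl --context addons-490000 get events -n default --field-selector involvedObject.name=busybox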

TestCertOptions (10.16s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-779000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-779000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.89549s)

-- stdout --
	* [cert-options-779000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19651
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-779000" primary control-plane node in "cert-options-779000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-779000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-779000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-779000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-779000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-779000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (79.3635ms)

-- stdout --
	* The control-plane node cert-options-779000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-779000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-779000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-779000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-779000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-779000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (41.585166ms)

-- stdout --
	* The control-plane node cert-options-779000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-779000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-779000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right api port. 
-- stdout --
	* The control-plane node cert-options-779000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-779000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-09-16 04:05:51.987458 -0700 PDT m=+2768.009968209
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-779000 -n cert-options-779000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-779000 -n cert-options-779000: exit status 7 (30.826458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-779000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-779000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-779000
--- FAIL: TestCertOptions (10.16s)
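This failure, like every qemu2 start below, stops at the same point: the socket_vmnet client cannot reach /var/run/socket_vmnet, so no VM boots and every later assertion fails against a stopped host. A pre-flight sketch for the build host, assuming a Homebrew-managed socket_vmnet install (the service name is an assumption, not taken from this log):

	# The socket the qemu2 driver dials; it must exist with a daemon behind it:
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# Assumed recovery step on a Homebrew/launchd setup:
	sudo brew services restart socket_vmnet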

TestCertExpiration (195.44s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-703000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-703000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (10.090844125s)

-- stdout --
	* [cert-expiration-703000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19651
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-703000" primary control-plane node in "cert-expiration-703000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-703000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-703000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-703000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-703000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-703000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.19459675s)

-- stdout --
	* [cert-expiration-703000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19651
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-703000" primary control-plane node in "cert-expiration-703000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-703000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-703000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-703000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-703000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-703000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19651
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-703000" primary control-plane node in "cert-expiration-703000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-703000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-703000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-703000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-09-16 04:08:52.02401 -0700 PDT m=+2948.050079543
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-703000 -n cert-expiration-703000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-703000 -n cert-expiration-703000: exit status 7 (64.516334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-703000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-703000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-703000
--- FAIL: TestCertExpiration (195.44s)
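Neither start got past VM creation, so the certificate-expiry assertions never ran. On a cluster that does boot, the check the test performs can be reproduced by hand; a sketch reusing the cert path from the ssh commands in TestCertOptions above:

	# Print the apiserver certificate's notAfter date inside the node:
	out/minikube-darwin-arm64 -p cert-expiration-703000 ssh "openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"
	# With --cert-expiration=3m, notAfter should fall roughly 3 minutes after start.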

TestDockerFlags (10.14s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-354000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-354000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.89896525s)

-- stdout --
	* [docker-flags-354000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19651
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-354000" primary control-plane node in "docker-flags-354000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-354000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0916 04:05:31.829188    4547 out.go:345] Setting OutFile to fd 1 ...
	I0916 04:05:31.829315    4547 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:05:31.829318    4547 out.go:358] Setting ErrFile to fd 2...
	I0916 04:05:31.829321    4547 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:05:31.829447    4547 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1133/.minikube/bin
	I0916 04:05:31.830613    4547 out.go:352] Setting JSON to false
	I0916 04:05:31.846693    4547 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3894,"bootTime":1726480837,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0916 04:05:31.846766    4547 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 04:05:31.852535    4547 out.go:177] * [docker-flags-354000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0916 04:05:31.859405    4547 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 04:05:31.859481    4547 notify.go:220] Checking for updates...
	I0916 04:05:31.865854    4547 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	I0916 04:05:31.869385    4547 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0916 04:05:31.872432    4547 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 04:05:31.875425    4547 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	I0916 04:05:31.878431    4547 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 04:05:31.881743    4547 config.go:182] Loaded profile config "force-systemd-flag-622000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 04:05:31.881808    4547 config.go:182] Loaded profile config "multinode-990000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 04:05:31.881856    4547 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 04:05:31.886369    4547 out.go:177] * Using the qemu2 driver based on user configuration
	I0916 04:05:31.893426    4547 start.go:297] selected driver: qemu2
	I0916 04:05:31.893436    4547 start.go:901] validating driver "qemu2" against <nil>
	I0916 04:05:31.893444    4547 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 04:05:31.895740    4547 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 04:05:31.898429    4547 out.go:177] * Automatically selected the socket_vmnet network
	I0916 04:05:31.901473    4547 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0916 04:05:31.901489    4547 cni.go:84] Creating CNI manager for ""
	I0916 04:05:31.901514    4547 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 04:05:31.901522    4547 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0916 04:05:31.901549    4547 start.go:340] cluster config:
	{Name:docker-flags-354000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-354000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 04:05:31.905293    4547 iso.go:125] acquiring lock: {Name:mk55e9fb1297ea51932361bbd0234f8c1091a697 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 04:05:31.912328    4547 out.go:177] * Starting "docker-flags-354000" primary control-plane node in "docker-flags-354000" cluster
	I0916 04:05:31.916419    4547 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 04:05:31.916436    4547 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0916 04:05:31.916448    4547 cache.go:56] Caching tarball of preloaded images
	I0916 04:05:31.916516    4547 preload.go:172] Found /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0916 04:05:31.916523    4547 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0916 04:05:31.916597    4547 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/docker-flags-354000/config.json ...
	I0916 04:05:31.916610    4547 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/docker-flags-354000/config.json: {Name:mkc0c2c439f87d7bcbe08a2ae63f449e5f6ea398 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 04:05:31.916831    4547 start.go:360] acquireMachinesLock for docker-flags-354000: {Name:mk751074fc8f4092f9eb4cf42ed6e63cca97a7fa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 04:05:31.916869    4547 start.go:364] duration metric: took 29.708µs to acquireMachinesLock for "docker-flags-354000"
	I0916 04:05:31.916880    4547 start.go:93] Provisioning new machine with config: &{Name:docker-flags-354000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-354000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 04:05:31.916914    4547 start.go:125] createHost starting for "" (driver="qemu2")
	I0916 04:05:31.925397    4547 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0916 04:05:31.943553    4547 start.go:159] libmachine.API.Create for "docker-flags-354000" (driver="qemu2")
	I0916 04:05:31.943586    4547 client.go:168] LocalClient.Create starting
	I0916 04:05:31.943656    4547 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/ca.pem
	I0916 04:05:31.943692    4547 main.go:141] libmachine: Decoding PEM data...
	I0916 04:05:31.943702    4547 main.go:141] libmachine: Parsing certificate...
	I0916 04:05:31.943750    4547 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/cert.pem
	I0916 04:05:31.943775    4547 main.go:141] libmachine: Decoding PEM data...
	I0916 04:05:31.943781    4547 main.go:141] libmachine: Parsing certificate...
	I0916 04:05:31.944162    4547 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19651-1133/.minikube/cache/iso/arm64/minikube-v1.34.0-1726415472-19646-arm64.iso...
	I0916 04:05:32.106529    4547 main.go:141] libmachine: Creating SSH key...
	I0916 04:05:32.133836    4547 main.go:141] libmachine: Creating Disk image...
	I0916 04:05:32.133842    4547 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0916 04:05:32.134013    4547 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/docker-flags-354000/disk.qcow2.raw /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/docker-flags-354000/disk.qcow2
	I0916 04:05:32.143049    4547 main.go:141] libmachine: STDOUT: 
	I0916 04:05:32.143073    4547 main.go:141] libmachine: STDERR: 
	I0916 04:05:32.143136    4547 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/docker-flags-354000/disk.qcow2 +20000M
	I0916 04:05:32.150973    4547 main.go:141] libmachine: STDOUT: Image resized.
	
	I0916 04:05:32.150985    4547 main.go:141] libmachine: STDERR: 
	I0916 04:05:32.151005    4547 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/docker-flags-354000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/docker-flags-354000/disk.qcow2
	I0916 04:05:32.151010    4547 main.go:141] libmachine: Starting QEMU VM...
	I0916 04:05:32.151022    4547 qemu.go:418] Using hvf for hardware acceleration
	I0916 04:05:32.151050    4547 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/docker-flags-354000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/docker-flags-354000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/docker-flags-354000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:88:b2:b6:50:81 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/docker-flags-354000/disk.qcow2
	I0916 04:05:32.152615    4547 main.go:141] libmachine: STDOUT: 
	I0916 04:05:32.152628    4547 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 04:05:32.152653    4547 client.go:171] duration metric: took 209.064208ms to LocalClient.Create
	I0916 04:05:34.154767    4547 start.go:128] duration metric: took 2.237882875s to createHost
	I0916 04:05:34.154860    4547 start.go:83] releasing machines lock for "docker-flags-354000", held for 2.238021625s
	W0916 04:05:34.154918    4547 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 04:05:34.178876    4547 out.go:177] * Deleting "docker-flags-354000" in qemu2 ...
	W0916 04:05:34.203294    4547 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 04:05:34.203312    4547 start.go:729] Will try again in 5 seconds ...
	I0916 04:05:39.205464    4547 start.go:360] acquireMachinesLock for docker-flags-354000: {Name:mk751074fc8f4092f9eb4cf42ed6e63cca97a7fa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 04:05:39.304650    4547 start.go:364] duration metric: took 99.047792ms to acquireMachinesLock for "docker-flags-354000"
	I0916 04:05:39.304768    4547 start.go:93] Provisioning new machine with config: &{Name:docker-flags-354000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-354000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 04:05:39.305083    4547 start.go:125] createHost starting for "" (driver="qemu2")
	I0916 04:05:39.316547    4547 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0916 04:05:39.365137    4547 start.go:159] libmachine.API.Create for "docker-flags-354000" (driver="qemu2")
	I0916 04:05:39.365197    4547 client.go:168] LocalClient.Create starting
	I0916 04:05:39.365325    4547 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/ca.pem
	I0916 04:05:39.365380    4547 main.go:141] libmachine: Decoding PEM data...
	I0916 04:05:39.365394    4547 main.go:141] libmachine: Parsing certificate...
	I0916 04:05:39.365463    4547 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/cert.pem
	I0916 04:05:39.365506    4547 main.go:141] libmachine: Decoding PEM data...
	I0916 04:05:39.365521    4547 main.go:141] libmachine: Parsing certificate...
	I0916 04:05:39.366137    4547 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19651-1133/.minikube/cache/iso/arm64/minikube-v1.34.0-1726415472-19646-arm64.iso...
	I0916 04:05:39.540663    4547 main.go:141] libmachine: Creating SSH key...
	I0916 04:05:39.631313    4547 main.go:141] libmachine: Creating Disk image...
	I0916 04:05:39.631318    4547 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0916 04:05:39.631502    4547 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/docker-flags-354000/disk.qcow2.raw /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/docker-flags-354000/disk.qcow2
	I0916 04:05:39.640860    4547 main.go:141] libmachine: STDOUT: 
	I0916 04:05:39.640873    4547 main.go:141] libmachine: STDERR: 
	I0916 04:05:39.640933    4547 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/docker-flags-354000/disk.qcow2 +20000M
	I0916 04:05:39.648757    4547 main.go:141] libmachine: STDOUT: Image resized.
	
	I0916 04:05:39.648783    4547 main.go:141] libmachine: STDERR: 
	I0916 04:05:39.648792    4547 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/docker-flags-354000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/docker-flags-354000/disk.qcow2
	I0916 04:05:39.648797    4547 main.go:141] libmachine: Starting QEMU VM...
	I0916 04:05:39.648806    4547 qemu.go:418] Using hvf for hardware acceleration
	I0916 04:05:39.648831    4547 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/docker-flags-354000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/docker-flags-354000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/docker-flags-354000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:0f:60:94:4a:6e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/docker-flags-354000/disk.qcow2
	I0916 04:05:39.650472    4547 main.go:141] libmachine: STDOUT: 
	I0916 04:05:39.650485    4547 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 04:05:39.650498    4547 client.go:171] duration metric: took 285.299292ms to LocalClient.Create
	I0916 04:05:41.652738    4547 start.go:128] duration metric: took 2.347625792s to createHost
	I0916 04:05:41.652828    4547 start.go:83] releasing machines lock for "docker-flags-354000", held for 2.348174625s
	W0916 04:05:41.653207    4547 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-354000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-354000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 04:05:41.668917    4547 out.go:201] 
	W0916 04:05:41.674057    4547 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0916 04:05:41.674088    4547 out.go:270] * 
	* 
	W0916 04:05:41.676874    4547 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 04:05:41.686836    4547 out.go:201] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-354000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-354000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-354000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (81.355041ms)

-- stdout --
	* The control-plane node docker-flags-354000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-354000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-354000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-354000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-354000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-354000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-354000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-354000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-354000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (44.631791ms)

-- stdout --
	* The control-plane node docker-flags-354000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-354000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-354000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-354000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to include *--debug*. output: "* The control-plane node docker-flags-354000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-354000\"\n"
panic.go:629: *** TestDockerFlags FAILED at 2024-09-16 04:05:41.829789 -0700 PDT m=+2757.852098459
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-354000 -n docker-flags-354000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-354000 -n docker-flags-354000: exit status 7 (28.6015ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-354000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-354000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-354000
--- FAIL: TestDockerFlags (10.14s)
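For context, the two failed assertions inspect systemd's view of dockerd inside the guest. On a VM that actually starts, a passing run would look roughly like the sketch below; the expected values are inferred from the --docker-env/--docker-opt flags passed to start, not observed in this run:

	out/minikube-darwin-arm64 -p docker-flags-354000 ssh "sudo systemctl show docker --property=Environment --no-pager"
	# expected to include FOO=BAR and BAZ=BAT
	out/minikube-darwin-arm64 -p docker-flags-354000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
	# expected the ExecStart line to include the --debug daemon flag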

TestForceSystemdFlag (10.05s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-622000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-622000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.846855166s)

-- stdout --
	* [force-systemd-flag-622000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19651
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-622000" primary control-plane node in "force-systemd-flag-622000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-622000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0916 04:05:26.719475    4526 out.go:345] Setting OutFile to fd 1 ...
	I0916 04:05:26.719603    4526 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:05:26.719606    4526 out.go:358] Setting ErrFile to fd 2...
	I0916 04:05:26.719611    4526 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:05:26.719765    4526 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1133/.minikube/bin
	I0916 04:05:26.720836    4526 out.go:352] Setting JSON to false
	I0916 04:05:26.736921    4526 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3889,"bootTime":1726480837,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0916 04:05:26.736992    4526 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 04:05:26.744772    4526 out.go:177] * [force-systemd-flag-622000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0916 04:05:26.752819    4526 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 04:05:26.752847    4526 notify.go:220] Checking for updates...
	I0916 04:05:26.763723    4526 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	I0916 04:05:26.765293    4526 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0916 04:05:26.768692    4526 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 04:05:26.771753    4526 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	I0916 04:05:26.774777    4526 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 04:05:26.778041    4526 config.go:182] Loaded profile config "force-systemd-env-899000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 04:05:26.778116    4526 config.go:182] Loaded profile config "multinode-990000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 04:05:26.778164    4526 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 04:05:26.781765    4526 out.go:177] * Using the qemu2 driver based on user configuration
	I0916 04:05:26.788708    4526 start.go:297] selected driver: qemu2
	I0916 04:05:26.788713    4526 start.go:901] validating driver "qemu2" against <nil>
	I0916 04:05:26.788719    4526 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 04:05:26.791166    4526 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 04:05:26.793785    4526 out.go:177] * Automatically selected the socket_vmnet network
	I0916 04:05:26.796829    4526 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0916 04:05:26.796843    4526 cni.go:84] Creating CNI manager for ""
	I0916 04:05:26.796868    4526 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 04:05:26.796877    4526 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0916 04:05:26.796908    4526 start.go:340] cluster config:
	{Name:force-systemd-flag-622000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-622000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 04:05:26.800932    4526 iso.go:125] acquiring lock: {Name:mk55e9fb1297ea51932361bbd0234f8c1091a697 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 04:05:26.808742    4526 out.go:177] * Starting "force-systemd-flag-622000" primary control-plane node in "force-systemd-flag-622000" cluster
	I0916 04:05:26.812725    4526 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 04:05:26.812739    4526 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0916 04:05:26.812747    4526 cache.go:56] Caching tarball of preloaded images
	I0916 04:05:26.812803    4526 preload.go:172] Found /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0916 04:05:26.812810    4526 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0916 04:05:26.812869    4526 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/force-systemd-flag-622000/config.json ...
	I0916 04:05:26.812881    4526 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/force-systemd-flag-622000/config.json: {Name:mkeeaad02d6066d9edf33647fed57c2603fbc091 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 04:05:26.813100    4526 start.go:360] acquireMachinesLock for force-systemd-flag-622000: {Name:mk751074fc8f4092f9eb4cf42ed6e63cca97a7fa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 04:05:26.813133    4526 start.go:364] duration metric: took 27.125µs to acquireMachinesLock for "force-systemd-flag-622000"
	I0916 04:05:26.813145    4526 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-622000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-622000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 04:05:26.813172    4526 start.go:125] createHost starting for "" (driver="qemu2")
	I0916 04:05:26.819665    4526 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0916 04:05:26.837282    4526 start.go:159] libmachine.API.Create for "force-systemd-flag-622000" (driver="qemu2")
	I0916 04:05:26.837317    4526 client.go:168] LocalClient.Create starting
	I0916 04:05:26.837378    4526 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/ca.pem
	I0916 04:05:26.837409    4526 main.go:141] libmachine: Decoding PEM data...
	I0916 04:05:26.837418    4526 main.go:141] libmachine: Parsing certificate...
	I0916 04:05:26.837456    4526 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/cert.pem
	I0916 04:05:26.837486    4526 main.go:141] libmachine: Decoding PEM data...
	I0916 04:05:26.837496    4526 main.go:141] libmachine: Parsing certificate...
	I0916 04:05:26.837855    4526 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19651-1133/.minikube/cache/iso/arm64/minikube-v1.34.0-1726415472-19646-arm64.iso...
	I0916 04:05:26.999403    4526 main.go:141] libmachine: Creating SSH key...
	I0916 04:05:27.079971    4526 main.go:141] libmachine: Creating Disk image...
	I0916 04:05:27.079977    4526 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0916 04:05:27.080139    4526 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/force-systemd-flag-622000/disk.qcow2.raw /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/force-systemd-flag-622000/disk.qcow2
	I0916 04:05:27.089131    4526 main.go:141] libmachine: STDOUT: 
	I0916 04:05:27.089147    4526 main.go:141] libmachine: STDERR: 
	I0916 04:05:27.089214    4526 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/force-systemd-flag-622000/disk.qcow2 +20000M
	I0916 04:05:27.097004    4526 main.go:141] libmachine: STDOUT: Image resized.
	
	I0916 04:05:27.097022    4526 main.go:141] libmachine: STDERR: 
	I0916 04:05:27.097041    4526 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/force-systemd-flag-622000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/force-systemd-flag-622000/disk.qcow2
	I0916 04:05:27.097047    4526 main.go:141] libmachine: Starting QEMU VM...
	I0916 04:05:27.097059    4526 qemu.go:418] Using hvf for hardware acceleration
	I0916 04:05:27.097086    4526 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/force-systemd-flag-622000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/force-systemd-flag-622000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/force-systemd-flag-622000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:be:f2:58:4c:e9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/force-systemd-flag-622000/disk.qcow2
	I0916 04:05:27.098693    4526 main.go:141] libmachine: STDOUT: 
	I0916 04:05:27.098718    4526 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 04:05:27.098738    4526 client.go:171] duration metric: took 261.419958ms to LocalClient.Create
	I0916 04:05:29.100870    4526 start.go:128] duration metric: took 2.287721625s to createHost
	I0916 04:05:29.100940    4526 start.go:83] releasing machines lock for "force-systemd-flag-622000", held for 2.287842625s
	W0916 04:05:29.101024    4526 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 04:05:29.117989    4526 out.go:177] * Deleting "force-systemd-flag-622000" in qemu2 ...
	W0916 04:05:29.145368    4526 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 04:05:29.145389    4526 start.go:729] Will try again in 5 seconds ...
	I0916 04:05:34.147521    4526 start.go:360] acquireMachinesLock for force-systemd-flag-622000: {Name:mk751074fc8f4092f9eb4cf42ed6e63cca97a7fa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 04:05:34.154988    4526 start.go:364] duration metric: took 7.293333ms to acquireMachinesLock for "force-systemd-flag-622000"
	I0916 04:05:34.155105    4526 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-622000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-622000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 04:05:34.155372    4526 start.go:125] createHost starting for "" (driver="qemu2")
	I0916 04:05:34.164873    4526 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0916 04:05:34.216329    4526 start.go:159] libmachine.API.Create for "force-systemd-flag-622000" (driver="qemu2")
	I0916 04:05:34.216377    4526 client.go:168] LocalClient.Create starting
	I0916 04:05:34.216498    4526 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/ca.pem
	I0916 04:05:34.216566    4526 main.go:141] libmachine: Decoding PEM data...
	I0916 04:05:34.216587    4526 main.go:141] libmachine: Parsing certificate...
	I0916 04:05:34.216644    4526 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/cert.pem
	I0916 04:05:34.216690    4526 main.go:141] libmachine: Decoding PEM data...
	I0916 04:05:34.216701    4526 main.go:141] libmachine: Parsing certificate...
	I0916 04:05:34.217256    4526 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19651-1133/.minikube/cache/iso/arm64/minikube-v1.34.0-1726415472-19646-arm64.iso...
	I0916 04:05:34.409630    4526 main.go:141] libmachine: Creating SSH key...
	I0916 04:05:34.463233    4526 main.go:141] libmachine: Creating Disk image...
	I0916 04:05:34.463238    4526 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0916 04:05:34.463411    4526 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/force-systemd-flag-622000/disk.qcow2.raw /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/force-systemd-flag-622000/disk.qcow2
	I0916 04:05:34.472686    4526 main.go:141] libmachine: STDOUT: 
	I0916 04:05:34.472709    4526 main.go:141] libmachine: STDERR: 
	I0916 04:05:34.472765    4526 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/force-systemd-flag-622000/disk.qcow2 +20000M
	I0916 04:05:34.480526    4526 main.go:141] libmachine: STDOUT: Image resized.
	
	I0916 04:05:34.480553    4526 main.go:141] libmachine: STDERR: 
	I0916 04:05:34.480564    4526 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/force-systemd-flag-622000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/force-systemd-flag-622000/disk.qcow2
	I0916 04:05:34.480570    4526 main.go:141] libmachine: Starting QEMU VM...
	I0916 04:05:34.480580    4526 qemu.go:418] Using hvf for hardware acceleration
	I0916 04:05:34.480606    4526 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/force-systemd-flag-622000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/force-systemd-flag-622000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/force-systemd-flag-622000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:c4:c9:0c:f1:dc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/force-systemd-flag-622000/disk.qcow2
	I0916 04:05:34.482235    4526 main.go:141] libmachine: STDOUT: 
	I0916 04:05:34.482251    4526 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 04:05:34.482263    4526 client.go:171] duration metric: took 265.886834ms to LocalClient.Create
	I0916 04:05:36.484402    4526 start.go:128] duration metric: took 2.329043083s to createHost
	I0916 04:05:36.484475    4526 start.go:83] releasing machines lock for "force-systemd-flag-622000", held for 2.329511667s
	W0916 04:05:36.484870    4526 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-622000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-622000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 04:05:36.499510    4526 out.go:201] 
	W0916 04:05:36.513694    4526 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0916 04:05:36.513736    4526 out.go:270] * 
	* 
	W0916 04:05:36.516139    4526 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 04:05:36.525508    4526 out.go:201] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-622000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-622000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-622000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (79.53025ms)

-- stdout --
	* The control-plane node force-systemd-flag-622000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-622000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-622000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-09-16 04:05:36.623774 -0700 PDT m=+2752.645980293
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-622000 -n force-systemd-flag-622000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-622000 -n force-systemd-flag-622000: exit status 7 (33.743292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-622000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-622000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-622000
--- FAIL: TestForceSystemdFlag (10.05s)
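Root-cause note: this failure never reaches the force-systemd assertion. Both VM-creation attempts abort at the same step, when socket_vmnet_client cannot reach the /var/run/socket_vmnet socket ("Connection refused"), so QEMU is never launched; TestForceSystemdEnv below fails identically, which points at the socket_vmnet daemon on the CI host being down rather than at the test logic. A minimal triage sketch for the host (paths taken from the logs above; the launchd label is a guess, and the relaunch invocation follows the socket_vmnet README, so treat it as an assumption about this host's setup):

	ls -l /var/run/socket_vmnet                 # does the socket exist at all?
	sudo launchctl list | grep socket_vmnet     # is a launchd job loaded for it?
	# relaunch the daemon with the gateway address minikube expects on this network:
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet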

TestForceSystemdEnv (12.42s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-899000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-899000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (12.223427417s)

-- stdout --
	* [force-systemd-env-899000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19651
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-899000" primary control-plane node in "force-systemd-env-899000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-899000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0916 04:05:19.413547    4494 out.go:345] Setting OutFile to fd 1 ...
	I0916 04:05:19.413687    4494 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:05:19.413690    4494 out.go:358] Setting ErrFile to fd 2...
	I0916 04:05:19.413693    4494 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:05:19.413821    4494 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1133/.minikube/bin
	I0916 04:05:19.414867    4494 out.go:352] Setting JSON to false
	I0916 04:05:19.430914    4494 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3882,"bootTime":1726480837,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0916 04:05:19.430991    4494 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 04:05:19.437050    4494 out.go:177] * [force-systemd-env-899000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0916 04:05:19.442963    4494 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 04:05:19.443014    4494 notify.go:220] Checking for updates...
	I0916 04:05:19.449954    4494 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	I0916 04:05:19.452923    4494 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0916 04:05:19.455895    4494 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 04:05:19.458952    4494 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	I0916 04:05:19.461906    4494 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0916 04:05:19.465233    4494 config.go:182] Loaded profile config "multinode-990000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 04:05:19.465294    4494 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 04:05:19.467843    4494 out.go:177] * Using the qemu2 driver based on user configuration
	I0916 04:05:19.478968    4494 start.go:297] selected driver: qemu2
	I0916 04:05:19.478976    4494 start.go:901] validating driver "qemu2" against <nil>
	I0916 04:05:19.478983    4494 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 04:05:19.481329    4494 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 04:05:19.484837    4494 out.go:177] * Automatically selected the socket_vmnet network
	I0916 04:05:19.488002    4494 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0916 04:05:19.488028    4494 cni.go:84] Creating CNI manager for ""
	I0916 04:05:19.488049    4494 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 04:05:19.488060    4494 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0916 04:05:19.488096    4494 start.go:340] cluster config:
	{Name:force-systemd-env-899000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-899000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 04:05:19.491902    4494 iso.go:125] acquiring lock: {Name:mk55e9fb1297ea51932361bbd0234f8c1091a697 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 04:05:19.499945    4494 out.go:177] * Starting "force-systemd-env-899000" primary control-plane node in "force-systemd-env-899000" cluster
	I0916 04:05:19.503938    4494 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 04:05:19.503958    4494 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0916 04:05:19.503973    4494 cache.go:56] Caching tarball of preloaded images
	I0916 04:05:19.504058    4494 preload.go:172] Found /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0916 04:05:19.504065    4494 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0916 04:05:19.504135    4494 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/force-systemd-env-899000/config.json ...
	I0916 04:05:19.504150    4494 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/force-systemd-env-899000/config.json: {Name:mk7fa65892f851270503a85b309f945eb6e1c36a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 04:05:19.504573    4494 start.go:360] acquireMachinesLock for force-systemd-env-899000: {Name:mk751074fc8f4092f9eb4cf42ed6e63cca97a7fa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 04:05:19.504611    4494 start.go:364] duration metric: took 29.833µs to acquireMachinesLock for "force-systemd-env-899000"
	I0916 04:05:19.504623    4494 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-899000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-899000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 04:05:19.504660    4494 start.go:125] createHost starting for "" (driver="qemu2")
	I0916 04:05:19.508911    4494 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0916 04:05:19.526883    4494 start.go:159] libmachine.API.Create for "force-systemd-env-899000" (driver="qemu2")
	I0916 04:05:19.526908    4494 client.go:168] LocalClient.Create starting
	I0916 04:05:19.526981    4494 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/ca.pem
	I0916 04:05:19.527010    4494 main.go:141] libmachine: Decoding PEM data...
	I0916 04:05:19.527019    4494 main.go:141] libmachine: Parsing certificate...
	I0916 04:05:19.527057    4494 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/cert.pem
	I0916 04:05:19.527080    4494 main.go:141] libmachine: Decoding PEM data...
	I0916 04:05:19.527089    4494 main.go:141] libmachine: Parsing certificate...
	I0916 04:05:19.527492    4494 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19651-1133/.minikube/cache/iso/arm64/minikube-v1.34.0-1726415472-19646-arm64.iso...
	I0916 04:05:19.689885    4494 main.go:141] libmachine: Creating SSH key...
	I0916 04:05:19.779407    4494 main.go:141] libmachine: Creating Disk image...
	I0916 04:05:19.779413    4494 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0916 04:05:19.779600    4494 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/force-systemd-env-899000/disk.qcow2.raw /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/force-systemd-env-899000/disk.qcow2
	I0916 04:05:19.788581    4494 main.go:141] libmachine: STDOUT: 
	I0916 04:05:19.788608    4494 main.go:141] libmachine: STDERR: 
	I0916 04:05:19.788667    4494 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/force-systemd-env-899000/disk.qcow2 +20000M
	I0916 04:05:19.796556    4494 main.go:141] libmachine: STDOUT: Image resized.
	
	I0916 04:05:19.796570    4494 main.go:141] libmachine: STDERR: 
	I0916 04:05:19.796589    4494 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/force-systemd-env-899000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/force-systemd-env-899000/disk.qcow2
	I0916 04:05:19.796608    4494 main.go:141] libmachine: Starting QEMU VM...
	I0916 04:05:19.796623    4494 qemu.go:418] Using hvf for hardware acceleration
	I0916 04:05:19.796650    4494 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/force-systemd-env-899000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/force-systemd-env-899000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/force-systemd-env-899000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:60:4a:7a:0a:a1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/force-systemd-env-899000/disk.qcow2
	I0916 04:05:19.798247    4494 main.go:141] libmachine: STDOUT: 
	I0916 04:05:19.798260    4494 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 04:05:19.798286    4494 client.go:171] duration metric: took 271.376958ms to LocalClient.Create
	I0916 04:05:21.800364    4494 start.go:128] duration metric: took 2.295743625s to createHost
	I0916 04:05:21.800378    4494 start.go:83] releasing machines lock for "force-systemd-env-899000", held for 2.295807541s
	W0916 04:05:21.800385    4494 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 04:05:21.809906    4494 out.go:177] * Deleting "force-systemd-env-899000" in qemu2 ...
	W0916 04:05:21.821432    4494 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 04:05:21.821449    4494 start.go:729] Will try again in 5 seconds ...
	I0916 04:05:26.822505    4494 start.go:360] acquireMachinesLock for force-systemd-env-899000: {Name:mk751074fc8f4092f9eb4cf42ed6e63cca97a7fa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 04:05:29.101167    4494 start.go:364] duration metric: took 2.2786165s to acquireMachinesLock for "force-systemd-env-899000"
	I0916 04:05:29.101264    4494 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-899000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-899000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 04:05:29.101464    4494 start.go:125] createHost starting for "" (driver="qemu2")
	I0916 04:05:29.111961    4494 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0916 04:05:29.163529    4494 start.go:159] libmachine.API.Create for "force-systemd-env-899000" (driver="qemu2")
	I0916 04:05:29.163581    4494 client.go:168] LocalClient.Create starting
	I0916 04:05:29.163699    4494 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/ca.pem
	I0916 04:05:29.163768    4494 main.go:141] libmachine: Decoding PEM data...
	I0916 04:05:29.163784    4494 main.go:141] libmachine: Parsing certificate...
	I0916 04:05:29.163842    4494 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/cert.pem
	I0916 04:05:29.163891    4494 main.go:141] libmachine: Decoding PEM data...
	I0916 04:05:29.163907    4494 main.go:141] libmachine: Parsing certificate...
	I0916 04:05:29.164489    4494 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19651-1133/.minikube/cache/iso/arm64/minikube-v1.34.0-1726415472-19646-arm64.iso...
	I0916 04:05:29.409787    4494 main.go:141] libmachine: Creating SSH key...
	I0916 04:05:29.529211    4494 main.go:141] libmachine: Creating Disk image...
	I0916 04:05:29.529217    4494 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0916 04:05:29.529390    4494 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/force-systemd-env-899000/disk.qcow2.raw /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/force-systemd-env-899000/disk.qcow2
	I0916 04:05:29.538494    4494 main.go:141] libmachine: STDOUT: 
	I0916 04:05:29.538509    4494 main.go:141] libmachine: STDERR: 
	I0916 04:05:29.538571    4494 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/force-systemd-env-899000/disk.qcow2 +20000M
	I0916 04:05:29.546336    4494 main.go:141] libmachine: STDOUT: Image resized.
	
	I0916 04:05:29.546349    4494 main.go:141] libmachine: STDERR: 
	I0916 04:05:29.546360    4494 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/force-systemd-env-899000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/force-systemd-env-899000/disk.qcow2
	I0916 04:05:29.546365    4494 main.go:141] libmachine: Starting QEMU VM...
	I0916 04:05:29.546375    4494 qemu.go:418] Using hvf for hardware acceleration
	I0916 04:05:29.546403    4494 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/force-systemd-env-899000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/force-systemd-env-899000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/force-systemd-env-899000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:96:56:70:04:50 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/force-systemd-env-899000/disk.qcow2
	I0916 04:05:29.548047    4494 main.go:141] libmachine: STDOUT: 
	I0916 04:05:29.548066    4494 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 04:05:29.548081    4494 client.go:171] duration metric: took 384.501208ms to LocalClient.Create
	I0916 04:05:31.550224    4494 start.go:128] duration metric: took 2.448777375s to createHost
	I0916 04:05:31.550285    4494 start.go:83] releasing machines lock for "force-systemd-env-899000", held for 2.449126125s
	W0916 04:05:31.550665    4494 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-899000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-899000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 04:05:31.572403    4494 out.go:201] 
	W0916 04:05:31.581445    4494 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0916 04:05:31.581475    4494 out.go:270] * 
	* 
	W0916 04:05:31.584085    4494 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 04:05:31.593337    4494 out.go:201] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-899000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-899000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-899000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (80.956042ms)

-- stdout --
	* The control-plane node force-systemd-env-899000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-899000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-899000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-09-16 04:05:31.690315 -0700 PDT m=+2747.712424251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-899000 -n force-systemd-env-899000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-899000 -n force-systemd-env-899000: exit status 7 (36.5725ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-899000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-899000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-899000
--- FAIL: TestForceSystemdEnv (12.42s)

TestFunctional/parallel/ServiceCmdConnect (29.81s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-926000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-926000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-hltsg" [d5b475a4-6c2a-4117-aede-ee307497e708] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-hltsg" [d5b475a4-6c2a-4117-aede-ee307497e708] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.003857833s
functional_test.go:1649: (dbg) Run:  out/minikube-darwin-arm64 -p functional-926000 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.105.4:32164
functional_test.go:1661: error fetching http://192.168.105.4:32164: Get "http://192.168.105.4:32164": dial tcp 192.168.105.4:32164: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:32164: Get "http://192.168.105.4:32164": dial tcp 192.168.105.4:32164: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:32164: Get "http://192.168.105.4:32164": dial tcp 192.168.105.4:32164: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:32164: Get "http://192.168.105.4:32164": dial tcp 192.168.105.4:32164: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:32164: Get "http://192.168.105.4:32164": dial tcp 192.168.105.4:32164: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:32164: Get "http://192.168.105.4:32164": dial tcp 192.168.105.4:32164: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:32164: Get "http://192.168.105.4:32164": dial tcp 192.168.105.4:32164: connect: connection refused
functional_test.go:1681: failed to fetch http://192.168.105.4:32164: Get "http://192.168.105.4:32164": dial tcp 192.168.105.4:32164: connect: connection refused
functional_test.go:1598: service test failed - dumping debug information
functional_test.go:1599: -----------------------service failure post-mortem--------------------------------
functional_test.go:1602: (dbg) Run:  kubectl --context functional-926000 describe po hello-node-connect
functional_test.go:1606: hello-node pod describe:
Name:             hello-node-connect-65d86f57f4-hltsg
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-926000/192.168.105.4
Start Time:       Mon, 16 Sep 2024 03:38:11 -0700
Labels:           app=hello-node-connect
                  pod-template-hash=65d86f57f4
Annotations:      <none>
Status:           Running
IP:               10.244.0.8
IPs:
  IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-connect-65d86f57f4
Containers:
  echoserver-arm:
    Container ID:   docker://d2bf5f62e215eecffc7326f1e391c05833b1c409c20c68292684e79773896c4e
    Image:          registry.k8s.io/echoserver-arm:1.8
    Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Mon, 16 Sep 2024 03:38:26 -0700
      Finished:     Mon, 16 Sep 2024 03:38:26 -0700
    Ready:          False
    Restart Count:  2
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4bwtz (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-4bwtz:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  28s                default-scheduler  Successfully assigned default/hello-node-connect-65d86f57f4-hltsg to functional-926000
  Normal   Pulled     14s (x3 over 28s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
  Normal   Created    14s (x3 over 28s)  kubelet            Created container echoserver-arm
  Normal   Started    14s (x3 over 28s)  kubelet            Started container echoserver-arm
  Warning  BackOff    0s (x3 over 27s)   kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-65d86f57f4-hltsg_default(d5b475a4-6c2a-4117-aede-ee307497e708)

functional_test.go:1608: (dbg) Run:  kubectl --context functional-926000 logs -l app=hello-node-connect
functional_test.go:1612: hello-node logs:
exec /usr/sbin/nginx: exec format error
functional_test.go:1614: (dbg) Run:  kubectl --context functional-926000 describe svc hello-node-connect
functional_test.go:1618: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.109.166.140
IPs:                      10.109.166.140
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  32164/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
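Root-cause note: the connection-refused errors above are a downstream symptom. The pod log ("exec /usr/sbin/nginx: exec format error") suggests the registry.k8s.io/echoserver-arm:1.8 image ships a binary built for the wrong architecture for this arm64 host, so the container crash-loops, the pod never becomes Ready, the Service's Endpoints list stays empty, and every fetch of NodePort 32164 is refused. A quick check of the published image's architecture (a sketch; assumes a local docker daemon on the host):

	docker pull registry.k8s.io/echoserver-arm:1.8
	docker image inspect --format '{{.Architecture}}' registry.k8s.io/echoserver-arm:1.8
	# "amd64" here would confirm the exec format error seen on the arm64 node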
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-926000 -n functional-926000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-926000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                                                      Args                                                       |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| image   | functional-926000 image ls                                                                                      | functional-926000 | jenkins | v1.34.0 | 16 Sep 24 03:37 PDT | 16 Sep 24 03:37 PDT |
	| image   | functional-926000 image save                                                                                    | functional-926000 | jenkins | v1.34.0 | 16 Sep 24 03:37 PDT | 16 Sep 24 03:38 PDT |
	|         | kicbase/echo-server:functional-926000                                                                           |                   |         |         |                     |                     |
	|         | /Users/jenkins/workspace/echo-server-save.tar                                                                   |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                                                               |                   |         |         |                     |                     |
	| image   | functional-926000 image rm                                                                                      | functional-926000 | jenkins | v1.34.0 | 16 Sep 24 03:38 PDT | 16 Sep 24 03:38 PDT |
	|         | kicbase/echo-server:functional-926000                                                                           |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                                                               |                   |         |         |                     |                     |
	| image   | functional-926000 image ls                                                                                      | functional-926000 | jenkins | v1.34.0 | 16 Sep 24 03:38 PDT | 16 Sep 24 03:38 PDT |
	| image   | functional-926000 image load                                                                                    | functional-926000 | jenkins | v1.34.0 | 16 Sep 24 03:38 PDT | 16 Sep 24 03:38 PDT |
	|         | /Users/jenkins/workspace/echo-server-save.tar                                                                   |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                                                               |                   |         |         |                     |                     |
	| image   | functional-926000 image ls                                                                                      | functional-926000 | jenkins | v1.34.0 | 16 Sep 24 03:38 PDT | 16 Sep 24 03:38 PDT |
	| image   | functional-926000 image save --daemon                                                                           | functional-926000 | jenkins | v1.34.0 | 16 Sep 24 03:38 PDT | 16 Sep 24 03:38 PDT |
	|         | kicbase/echo-server:functional-926000                                                                           |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-926000 ssh echo                                                                                      | functional-926000 | jenkins | v1.34.0 | 16 Sep 24 03:38 PDT | 16 Sep 24 03:38 PDT |
	|         | hello                                                                                                           |                   |         |         |                     |                     |
	| ssh     | functional-926000 ssh cat                                                                                       | functional-926000 | jenkins | v1.34.0 | 16 Sep 24 03:38 PDT | 16 Sep 24 03:38 PDT |
	|         | /etc/hostname                                                                                                   |                   |         |         |                     |                     |
	| tunnel  | functional-926000 tunnel                                                                                        | functional-926000 | jenkins | v1.34.0 | 16 Sep 24 03:38 PDT |                     |
	|         | --alsologtostderr                                                                                               |                   |         |         |                     |                     |
	| tunnel  | functional-926000 tunnel                                                                                        | functional-926000 | jenkins | v1.34.0 | 16 Sep 24 03:38 PDT |                     |
	|         | --alsologtostderr                                                                                               |                   |         |         |                     |                     |
	| tunnel  | functional-926000 tunnel                                                                                        | functional-926000 | jenkins | v1.34.0 | 16 Sep 24 03:38 PDT |                     |
	|         | --alsologtostderr                                                                                               |                   |         |         |                     |                     |
	| service | functional-926000 service list                                                                                  | functional-926000 | jenkins | v1.34.0 | 16 Sep 24 03:38 PDT | 16 Sep 24 03:38 PDT |
	| service | functional-926000 service list                                                                                  | functional-926000 | jenkins | v1.34.0 | 16 Sep 24 03:38 PDT | 16 Sep 24 03:38 PDT |
	|         | -o json                                                                                                         |                   |         |         |                     |                     |
	| service | functional-926000 service                                                                                       | functional-926000 | jenkins | v1.34.0 | 16 Sep 24 03:38 PDT | 16 Sep 24 03:38 PDT |
	|         | --namespace=default --https                                                                                     |                   |         |         |                     |                     |
	|         | --url hello-node                                                                                                |                   |         |         |                     |                     |
	| service | functional-926000                                                                                               | functional-926000 | jenkins | v1.34.0 | 16 Sep 24 03:38 PDT | 16 Sep 24 03:38 PDT |
	|         | service hello-node --url                                                                                        |                   |         |         |                     |                     |
	|         | --format={{.IP}}                                                                                                |                   |         |         |                     |                     |
	| service | functional-926000 service                                                                                       | functional-926000 | jenkins | v1.34.0 | 16 Sep 24 03:38 PDT | 16 Sep 24 03:38 PDT |
	|         | hello-node --url                                                                                                |                   |         |         |                     |                     |
	| addons  | functional-926000 addons list                                                                                   | functional-926000 | jenkins | v1.34.0 | 16 Sep 24 03:38 PDT | 16 Sep 24 03:38 PDT |
	| addons  | functional-926000 addons list                                                                                   | functional-926000 | jenkins | v1.34.0 | 16 Sep 24 03:38 PDT | 16 Sep 24 03:38 PDT |
	|         | -o json                                                                                                         |                   |         |         |                     |                     |
	| service | functional-926000 service                                                                                       | functional-926000 | jenkins | v1.34.0 | 16 Sep 24 03:38 PDT | 16 Sep 24 03:38 PDT |
	|         | hello-node-connect --url                                                                                        |                   |         |         |                     |                     |
	| mount   | -p functional-926000                                                                                            | functional-926000 | jenkins | v1.34.0 | 16 Sep 24 03:38 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1149688656/001:/mount-9p |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                                          |                   |         |         |                     |                     |
	| ssh     | functional-926000 ssh findmnt                                                                                   | functional-926000 | jenkins | v1.34.0 | 16 Sep 24 03:38 PDT |                     |
	|         | -T /mount-9p | grep 9p                                                                                          |                   |         |         |                     |                     |
	| ssh     | functional-926000 ssh findmnt                                                                                   | functional-926000 | jenkins | v1.34.0 | 16 Sep 24 03:38 PDT | 16 Sep 24 03:38 PDT |
	|         | -T /mount-9p | grep 9p                                                                                          |                   |         |         |                     |                     |
	| ssh     | functional-926000 ssh -- ls                                                                                     | functional-926000 | jenkins | v1.34.0 | 16 Sep 24 03:38 PDT | 16 Sep 24 03:38 PDT |
	|         | -la /mount-9p                                                                                                   |                   |         |         |                     |                     |
	| ssh     | functional-926000 ssh cat                                                                                       | functional-926000 | jenkins | v1.34.0 | 16 Sep 24 03:38 PDT | 16 Sep 24 03:38 PDT |
	|         | /mount-9p/test-1726483115945861000                                                                              |                   |         |         |                     |                     |
	|---------|-----------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 03:37:11
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 03:37:11.597659    2543 out.go:345] Setting OutFile to fd 1 ...
	I0916 03:37:11.597785    2543 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 03:37:11.597788    2543 out.go:358] Setting ErrFile to fd 2...
	I0916 03:37:11.597789    2543 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 03:37:11.597896    2543 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1133/.minikube/bin
	I0916 03:37:11.599029    2543 out.go:352] Setting JSON to false
	I0916 03:37:11.615993    2543 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2194,"bootTime":1726480837,"procs":452,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0916 03:37:11.616065    2543 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 03:37:11.620550    2543 out.go:177] * [functional-926000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0916 03:37:11.629516    2543 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 03:37:11.629555    2543 notify.go:220] Checking for updates...
	I0916 03:37:11.636537    2543 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	I0916 03:37:11.639514    2543 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0916 03:37:11.642527    2543 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 03:37:11.645568    2543 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	I0916 03:37:11.648478    2543 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 03:37:11.651780    2543 config.go:182] Loaded profile config "functional-926000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 03:37:11.651835    2543 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 03:37:11.656499    2543 out.go:177] * Using the qemu2 driver based on existing profile
	I0916 03:37:11.663482    2543 start.go:297] selected driver: qemu2
	I0916 03:37:11.663487    2543 start.go:901] validating driver "qemu2" against &{Name:functional-926000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-926000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 03:37:11.663561    2543 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 03:37:11.665889    2543 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 03:37:11.665909    2543 cni.go:84] Creating CNI manager for ""
	I0916 03:37:11.665943    2543 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 03:37:11.665990    2543 start.go:340] cluster config:
	{Name:functional-926000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-926000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 03:37:11.669518    2543 iso.go:125] acquiring lock: {Name:mk55e9fb1297ea51932361bbd0234f8c1091a697 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 03:37:11.677492    2543 out.go:177] * Starting "functional-926000" primary control-plane node in "functional-926000" cluster
	I0916 03:37:11.681503    2543 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 03:37:11.681518    2543 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0916 03:37:11.681526    2543 cache.go:56] Caching tarball of preloaded images
	I0916 03:37:11.681592    2543 preload.go:172] Found /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0916 03:37:11.681595    2543 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0916 03:37:11.681655    2543 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/functional-926000/config.json ...
	I0916 03:37:11.682113    2543 start.go:360] acquireMachinesLock for functional-926000: {Name:mk751074fc8f4092f9eb4cf42ed6e63cca97a7fa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 03:37:11.682145    2543 start.go:364] duration metric: took 27.125µs to acquireMachinesLock for "functional-926000"
	I0916 03:37:11.682154    2543 start.go:96] Skipping create...Using existing machine configuration
	I0916 03:37:11.682157    2543 fix.go:54] fixHost starting: 
	I0916 03:37:11.682737    2543 fix.go:112] recreateIfNeeded on functional-926000: state=Running err=<nil>
	W0916 03:37:11.682744    2543 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 03:37:11.691523    2543 out.go:177] * Updating the running qemu2 "functional-926000" VM ...
	I0916 03:37:11.695541    2543 machine.go:93] provisionDockerMachine start ...
	I0916 03:37:11.695581    2543 main.go:141] libmachine: Using SSH client type: native
	I0916 03:37:11.695702    2543 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102f29190] 0x102f2b9d0 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0916 03:37:11.695705    2543 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 03:37:11.747386    2543 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-926000
	
	I0916 03:37:11.747395    2543 buildroot.go:166] provisioning hostname "functional-926000"
	I0916 03:37:11.747447    2543 main.go:141] libmachine: Using SSH client type: native
	I0916 03:37:11.747550    2543 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102f29190] 0x102f2b9d0 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0916 03:37:11.747554    2543 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-926000 && echo "functional-926000" | sudo tee /etc/hostname
	I0916 03:37:11.802391    2543 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-926000
	
	I0916 03:37:11.802444    2543 main.go:141] libmachine: Using SSH client type: native
	I0916 03:37:11.802570    2543 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102f29190] 0x102f2b9d0 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0916 03:37:11.802577    2543 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-926000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-926000/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-926000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 03:37:11.854910    2543 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 03:37:11.854921    2543 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19651-1133/.minikube CaCertPath:/Users/jenkins/minikube-integration/19651-1133/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19651-1133/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19651-1133/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19651-1133/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19651-1133/.minikube}
	I0916 03:37:11.854928    2543 buildroot.go:174] setting up certificates
	I0916 03:37:11.854935    2543 provision.go:84] configureAuth start
	I0916 03:37:11.854939    2543 provision.go:143] copyHostCerts
	I0916 03:37:11.855008    2543 exec_runner.go:144] found /Users/jenkins/minikube-integration/19651-1133/.minikube/ca.pem, removing ...
	I0916 03:37:11.855016    2543 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19651-1133/.minikube/ca.pem
	I0916 03:37:11.855142    2543 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19651-1133/.minikube/ca.pem (1078 bytes)
	I0916 03:37:11.855324    2543 exec_runner.go:144] found /Users/jenkins/minikube-integration/19651-1133/.minikube/cert.pem, removing ...
	I0916 03:37:11.855326    2543 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19651-1133/.minikube/cert.pem
	I0916 03:37:11.855381    2543 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19651-1133/.minikube/cert.pem (1123 bytes)
	I0916 03:37:11.855485    2543 exec_runner.go:144] found /Users/jenkins/minikube-integration/19651-1133/.minikube/key.pem, removing ...
	I0916 03:37:11.855487    2543 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19651-1133/.minikube/key.pem
	I0916 03:37:11.855539    2543 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19651-1133/.minikube/key.pem (1675 bytes)
	I0916 03:37:11.855621    2543 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19651-1133/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19651-1133/.minikube/certs/ca-key.pem org=jenkins.functional-926000 san=[127.0.0.1 192.168.105.4 functional-926000 localhost minikube]
	I0916 03:37:12.112540    2543 provision.go:177] copyRemoteCerts
	I0916 03:37:12.112587    2543 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 03:37:12.112595    2543 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/functional-926000/id_rsa Username:docker}
	I0916 03:37:12.141149    2543 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 03:37:12.149786    2543 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0916 03:37:12.157926    2543 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 03:37:12.166080    2543 provision.go:87] duration metric: took 311.144709ms to configureAuth
	I0916 03:37:12.166087    2543 buildroot.go:189] setting minikube options for container-runtime
	I0916 03:37:12.166215    2543 config.go:182] Loaded profile config "functional-926000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 03:37:12.166256    2543 main.go:141] libmachine: Using SSH client type: native
	I0916 03:37:12.166344    2543 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102f29190] 0x102f2b9d0 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0916 03:37:12.166347    2543 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0916 03:37:12.218092    2543 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0916 03:37:12.218098    2543 buildroot.go:70] root file system type: tmpfs
	I0916 03:37:12.218150    2543 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0916 03:37:12.218209    2543 main.go:141] libmachine: Using SSH client type: native
	I0916 03:37:12.218309    2543 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102f29190] 0x102f2b9d0 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0916 03:37:12.218339    2543 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0916 03:37:12.273734    2543 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0916 03:37:12.273794    2543 main.go:141] libmachine: Using SSH client type: native
	I0916 03:37:12.273911    2543 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102f29190] 0x102f2b9d0 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0916 03:37:12.273922    2543 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0916 03:37:12.336620    2543 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 03:37:12.336628    2543 machine.go:96] duration metric: took 641.103458ms to provisionDockerMachine
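The docker.service update above is deliberately idempotent: the regenerated unit is written to docker.service.new, and only when `diff` reports a difference is the new file moved into place and the daemon reloaded and restarted. The same pattern in generic form (file and service names hypothetical):

    sudo diff -u /etc/foo.conf /etc/foo.conf.new || {
      sudo mv /etc/foo.conf.new /etc/foo.conf
      sudo systemctl daemon-reload && sudo systemctl restart foo
    }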
	I0916 03:37:12.336634    2543 start.go:293] postStartSetup for "functional-926000" (driver="qemu2")
	I0916 03:37:12.336640    2543 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 03:37:12.336724    2543 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 03:37:12.336733    2543 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/functional-926000/id_rsa Username:docker}
	I0916 03:37:12.365857    2543 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 03:37:12.367355    2543 info.go:137] Remote host: Buildroot 2023.02.9
	I0916 03:37:12.367360    2543 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19651-1133/.minikube/addons for local assets ...
	I0916 03:37:12.367440    2543 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19651-1133/.minikube/files for local assets ...
	I0916 03:37:12.367556    2543 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19651-1133/.minikube/files/etc/ssl/certs/16522.pem -> 16522.pem in /etc/ssl/certs
	I0916 03:37:12.367674    2543 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19651-1133/.minikube/files/etc/test/nested/copy/1652/hosts -> hosts in /etc/test/nested/copy/1652
	I0916 03:37:12.367718    2543 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1652
	I0916 03:37:12.370906    2543 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1133/.minikube/files/etc/ssl/certs/16522.pem --> /etc/ssl/certs/16522.pem (1708 bytes)
	I0916 03:37:12.379650    2543 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1133/.minikube/files/etc/test/nested/copy/1652/hosts --> /etc/test/nested/copy/1652/hosts (40 bytes)
	I0916 03:37:12.388132    2543 start.go:296] duration metric: took 51.495375ms for postStartSetup
	I0916 03:37:12.388144    2543 fix.go:56] duration metric: took 706.010459ms for fixHost
	I0916 03:37:12.388189    2543 main.go:141] libmachine: Using SSH client type: native
	I0916 03:37:12.388297    2543 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102f29190] 0x102f2b9d0 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0916 03:37:12.388300    2543 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0916 03:37:12.438301    2543 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726483032.512200313
	
	I0916 03:37:12.438306    2543 fix.go:216] guest clock: 1726483032.512200313
	I0916 03:37:12.438309    2543 fix.go:229] Guest: 2024-09-16 03:37:12.512200313 -0700 PDT Remote: 2024-09-16 03:37:12.388145 -0700 PDT m=+0.809385667 (delta=124.055313ms)
	I0916 03:37:12.438319    2543 fix.go:200] guest clock delta is within tolerance: 124.055313ms
	I0916 03:37:12.438321    2543 start.go:83] releasing machines lock for "functional-926000", held for 756.197583ms
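The guest-clock check above runs `date +%s.%N` inside the VM and compares the result with host time; the 124ms delta is within tolerance, so no resync happens. A standalone sketch of that comparison (the 2-second tolerance is an assumption, not minikube's actual threshold):

    guest=$(date +%s.%N)   # in the real flow this runs inside the VM over SSH
    host=$(date +%s.%N)
    awk -v g="$guest" -v h="$host" \
      'BEGIN { d = g - h; if (d < 0) d = -d; exit !(d < 2) }' \
      && echo "guest clock delta within tolerance"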
	I0916 03:37:12.438626    2543 ssh_runner.go:195] Run: cat /version.json
	I0916 03:37:12.438632    2543 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/functional-926000/id_rsa Username:docker}
	I0916 03:37:12.438637    2543 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 03:37:12.438651    2543 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/functional-926000/id_rsa Username:docker}
	I0916 03:37:12.506093    2543 ssh_runner.go:195] Run: systemctl --version
	I0916 03:37:12.508337    2543 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0916 03:37:12.510174    2543 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 03:37:12.510200    2543 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 03:37:12.513523    2543 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0916 03:37:12.513527    2543 start.go:495] detecting cgroup driver to use...
	I0916 03:37:12.513591    2543 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 03:37:12.519937    2543 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0916 03:37:12.523954    2543 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 03:37:12.528305    2543 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0916 03:37:12.528330    2543 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0916 03:37:12.532080    2543 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 03:37:12.536212    2543 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 03:37:12.540102    2543 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 03:37:12.544135    2543 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 03:37:12.548129    2543 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 03:37:12.552039    2543 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 03:37:12.555907    2543 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 03:37:12.559828    2543 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 03:37:12.562959    2543 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 03:37:12.566236    2543 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 03:37:12.670712    2543 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0916 03:37:12.679352    2543 start.go:495] detecting cgroup driver to use...
	I0916 03:37:12.679426    2543 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0916 03:37:12.685944    2543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 03:37:12.691995    2543 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 03:37:12.700703    2543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 03:37:12.706433    2543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 03:37:12.711680    2543 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 03:37:12.718222    2543 ssh_runner.go:195] Run: which cri-dockerd
	I0916 03:37:12.719546    2543 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0916 03:37:12.722792    2543 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0916 03:37:12.728845    2543 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0916 03:37:12.834149    2543 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0916 03:37:12.937024    2543 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0916 03:37:12.937070    2543 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0916 03:37:12.944098    2543 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 03:37:13.045490    2543 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0916 03:37:25.432748    2543 ssh_runner.go:235] Completed: sudo systemctl restart docker: (12.387642291s)
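The 12s docker restart completes the cgroup-driver reconfiguration written to /etc/docker/daemon.json above; the daemon's driver has to agree with the kubelet configuration generated later in this log (cgroupDriver: cgroupfs). A hedged verification inside the VM:

    docker info --format '{{.CgroupDriver}}'        # expect: cgroupfs
    grep cgroupDriver /var/lib/kubelet/config.yaml  # expect: cgroupfs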
	I0916 03:37:25.432831    2543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0916 03:37:25.438555    2543 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0916 03:37:25.446938    2543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0916 03:37:25.453084    2543 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0916 03:37:25.545760    2543 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0916 03:37:25.633622    2543 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 03:37:25.721877    2543 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0916 03:37:25.728895    2543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0916 03:37:25.734152    2543 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 03:37:25.829543    2543 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0916 03:37:25.857955    2543 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0916 03:37:25.858051    2543 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0916 03:37:25.860825    2543 start.go:563] Will wait 60s for crictl version
	I0916 03:37:25.860870    2543 ssh_runner.go:195] Run: which crictl
	I0916 03:37:25.862331    2543 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 03:37:25.874627    2543 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0916 03:37:25.874714    2543 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0916 03:37:25.882403    2543 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0916 03:37:25.893520    2543 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0916 03:37:25.893686    2543 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0916 03:37:25.903478    2543 out.go:177]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0916 03:37:25.907509    2543 kubeadm.go:883] updating cluster {Name:functional-926000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-926000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 03:37:25.907551    2543 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 03:37:25.907598    2543 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0916 03:37:25.913731    2543 docker.go:685] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-926000
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I0916 03:37:25.913736    2543 docker.go:615] Images already preloaded, skipping extraction
	I0916 03:37:25.913804    2543 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0916 03:37:25.919563    2543 docker.go:685] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-926000
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I0916 03:37:25.919575    2543 cache_images.go:84] Images are preloaded, skipping loading
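The "Images are preloaded" decision comes from comparing the `docker images` listing above against the expected image set for v1.31.1, which lets the start skip re-extracting the preload tarball. A minimal spot check (sketch):

    docker images --format '{{.Repository}}:{{.Tag}}' | grep kube-apiserver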
	I0916 03:37:25.919581    2543 kubeadm.go:934] updating node { 192.168.105.4 8441 v1.31.1 docker true true} ...
	I0916 03:37:25.919638    2543 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-926000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:functional-926000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 03:37:25.919700    2543 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0916 03:37:25.934799    2543 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I0916 03:37:25.934809    2543 cni.go:84] Creating CNI manager for ""
	I0916 03:37:25.934815    2543 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 03:37:25.934820    2543 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 03:37:25.934833    2543 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.4 APIServerPort:8441 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-926000 NodeName:functional-926000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.4"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.4 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 03:37:25.934899    2543 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.4
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-926000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.4
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.4"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0916 03:37:25.934967    2543 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 03:37:25.939274    2543 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 03:37:25.939301    2543 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 03:37:25.943125    2543 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0916 03:37:25.949095    2543 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 03:37:25.955000    2543 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2012 bytes)
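The generated kubeadm config is staged as kubeadm.yaml.new before kubeadm consumes it. A hedged sanity check (assuming `kubeadm config validate` is available in the v1.31 binaries shipped with this cluster; run inside the VM):

    sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new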
	I0916 03:37:25.961203    2543 ssh_runner.go:195] Run: grep 192.168.105.4	control-plane.minikube.internal$ /etc/hosts
	I0916 03:37:25.962591    2543 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 03:37:26.059375    2543 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 03:37:26.065752    2543 certs.go:68] Setting up /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/functional-926000 for IP: 192.168.105.4
	I0916 03:37:26.065756    2543 certs.go:194] generating shared ca certs ...
	I0916 03:37:26.065764    2543 certs.go:226] acquiring lock for ca certs: {Name:mk7bbdd60870074cef3b6b7f58dae6ae1dc0ffa1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 03:37:26.065940    2543 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19651-1133/.minikube/ca.key
	I0916 03:37:26.065989    2543 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19651-1133/.minikube/proxy-client-ca.key
	I0916 03:37:26.065993    2543 certs.go:256] generating profile certs ...
	I0916 03:37:26.066060    2543 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/functional-926000/client.key
	I0916 03:37:26.066112    2543 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/functional-926000/apiserver.key.ebc493e6
	I0916 03:37:26.066168    2543 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/functional-926000/proxy-client.key
	I0916 03:37:26.066322    2543 certs.go:484] found cert: /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/1652.pem (1338 bytes)
	W0916 03:37:26.066353    2543 certs.go:480] ignoring /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/1652_empty.pem, impossibly tiny 0 bytes
	I0916 03:37:26.066357    2543 certs.go:484] found cert: /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 03:37:26.066388    2543 certs.go:484] found cert: /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/ca.pem (1078 bytes)
	I0916 03:37:26.066405    2543 certs.go:484] found cert: /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/cert.pem (1123 bytes)
	I0916 03:37:26.066422    2543 certs.go:484] found cert: /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/key.pem (1675 bytes)
	I0916 03:37:26.066457    2543 certs.go:484] found cert: /Users/jenkins/minikube-integration/19651-1133/.minikube/files/etc/ssl/certs/16522.pem (1708 bytes)
	I0916 03:37:26.066830    2543 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1133/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 03:37:26.075760    2543 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1133/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0916 03:37:26.084720    2543 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1133/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 03:37:26.092851    2543 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1133/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 03:37:26.100850    2543 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/functional-926000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0916 03:37:26.108964    2543 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/functional-926000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0916 03:37:26.117176    2543 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/functional-926000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 03:37:26.125229    2543 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/functional-926000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 03:37:26.133326    2543 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/1652.pem --> /usr/share/ca-certificates/1652.pem (1338 bytes)
	I0916 03:37:26.141478    2543 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1133/.minikube/files/etc/ssl/certs/16522.pem --> /usr/share/ca-certificates/16522.pem (1708 bytes)
	I0916 03:37:26.149908    2543 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1133/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 03:37:26.158129    2543 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 03:37:26.163895    2543 ssh_runner.go:195] Run: openssl version
	I0916 03:37:26.166021    2543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1652.pem && ln -fs /usr/share/ca-certificates/1652.pem /etc/ssl/certs/1652.pem"
	I0916 03:37:26.169692    2543 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1652.pem
	I0916 03:37:26.171293    2543 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:35 /usr/share/ca-certificates/1652.pem
	I0916 03:37:26.171314    2543 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1652.pem
	I0916 03:37:26.173275    2543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1652.pem /etc/ssl/certs/51391683.0"
	I0916 03:37:26.176504    2543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16522.pem && ln -fs /usr/share/ca-certificates/16522.pem /etc/ssl/certs/16522.pem"
	I0916 03:37:26.180313    2543 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16522.pem
	I0916 03:37:26.181776    2543 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:35 /usr/share/ca-certificates/16522.pem
	I0916 03:37:26.181796    2543 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16522.pem
	I0916 03:37:26.183896    2543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16522.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 03:37:26.187562    2543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 03:37:26.191566    2543 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 03:37:26.193038    2543 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:20 /usr/share/ca-certificates/minikubeCA.pem
	I0916 03:37:26.193062    2543 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 03:37:26.194965    2543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
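The `openssl x509 -hash -noout` runs above are how OpenSSL-style trust stores get populated: the tool prints a hash of the certificate's subject name, and the PEM is then symlinked to /etc/ssl/certs/<hash>.0 (b5213941.0 for minikubeCA.pem here) so any TLS client using the system store can find it by hash. A minimal Go sketch of that step; the function name is illustrative, not minikube's:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // installCAByHash sketches the symlink dance in the log above: hash the
    // cert's subject name with openssl, then link the PEM to <hash>.0 under
    // /etc/ssl/certs so lookup-by-hash works.
    func installCAByHash(pemPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return fmt.Errorf("hashing %s: %w", pemPath, err)
    	}
    	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
    	// Mirror the log: only create the link if it is not already a symlink.
    	shell := fmt.Sprintf("test -L %s || ln -fs %s %s", link, pemPath, link)
    	return exec.Command("sudo", "/bin/bash", "-c", shell).Run()
    }

    func main() {
    	if err := installCAByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Println(err)
    	}
    }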
	I0916 03:37:26.198409    2543 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 03:37:26.200232    2543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0916 03:37:26.202416    2543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0916 03:37:26.204791    2543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0916 03:37:26.206722    2543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0916 03:37:26.208739    2543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0916 03:37:26.210731    2543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
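Each `-checkend 86400` invocation above asks openssl whether the certificate expires within the next 86400 seconds (24 hours); a non-zero exit would trigger regeneration. The same test in pure Go, as a sketch with a made-up helper name:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM-encoded cert at path expires
    // within d, the Go equivalent of `openssl x509 -checkend <seconds>`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("%s: no PEM block found", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	fmt.Println(soon, err)
    }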
	I0916 03:37:26.212804    2543 kubeadm.go:392] StartCluster: {Name:functional-926000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-926000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 03:37:26.212882    2543 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0916 03:37:26.218834    2543 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 03:37:26.222610    2543 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0916 03:37:26.222620    2543 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0916 03:37:26.222649    2543 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0916 03:37:26.225744    2543 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0916 03:37:26.226068    2543 kubeconfig.go:125] found "functional-926000" server: "https://192.168.105.4:8441"
	I0916 03:37:26.226742    2543 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0916 03:37:26.230465    2543 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -22,7 +22,7 @@
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.105.4"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    enable-admission-plugins: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     allocate-node-cidrs: "true"
	
	-- /stdout --
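The drift check above leans entirely on diff's exit status: 0 means the freshly generated kubeadm.yaml.new matches what is on disk, 1 means drift (reconfigure, as happens here because enable-admission-plugins changed), and anything higher is a genuine failure. A sketch of that status mapping:

    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    )

    // configDrifted sketches the comparison above: run `diff -u` and
    // translate its exit status (0 = identical, 1 = drift, >1 = error).
    func configDrifted(oldPath, newPath string) (bool, error) {
    	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
    	if err == nil {
    		return false, nil
    	}
    	var ee *exec.ExitError
    	if errors.As(err, &ee) && ee.ExitCode() == 1 {
    		fmt.Printf("detected config drift:\n%s", out)
    		return true, nil
    	}
    	return false, err
    }

    func main() {
    	drifted, err := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
    	fmt.Println(drifted, err)
    }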
	I0916 03:37:26.230470    2543 kubeadm.go:1160] stopping kube-system containers ...
	I0916 03:37:26.230516    2543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0916 03:37:26.242051    2543 docker.go:483] Stopping containers: [f40d95f035e8 9ede09182962 989ee1ad5ed4 29b5f0d351ca fac597693427 4b0e74b65ef9 95b295742694 a873035c85ce 5e185df598b9 6bc77b50b776 f1c7a6897f15 2e22edfaeba4 029ab4d41356 10d2ff7c1c26 a381883eed93 11fdbeb3a766 27205dcbe8aa d77adbd810b7 184f529377fb b5fd69563674 4c6b688890ba a437feacff50 e9aadd5257be b629802ccb36 13bf849d3fb1 bd84455ec861 17a065f28d03 b342f141e2a8]
	I0916 03:37:26.242131    2543 ssh_runner.go:195] Run: docker stop f40d95f035e8 9ede09182962 989ee1ad5ed4 29b5f0d351ca fac597693427 4b0e74b65ef9 95b295742694 a873035c85ce 5e185df598b9 6bc77b50b776 f1c7a6897f15 2e22edfaeba4 029ab4d41356 10d2ff7c1c26 a381883eed93 11fdbeb3a766 27205dcbe8aa d77adbd810b7 184f529377fb b5fd69563674 4c6b688890ba a437feacff50 e9aadd5257be b629802ccb36 13bf849d3fb1 bd84455ec861 17a065f28d03 b342f141e2a8
	I0916 03:37:26.250341    2543 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0916 03:37:26.362617    2543 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 03:37:26.368585    2543 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5647 Sep 16 10:36 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Sep 16 10:36 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Sep 16 10:36 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Sep 16 10:36 /etc/kubernetes/scheduler.conf
	
	I0916 03:37:26.368634    2543 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0916 03:37:26.373556    2543 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0916 03:37:26.377975    2543 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0916 03:37:26.382278    2543 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0916 03:37:26.382310    2543 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 03:37:26.386762    2543 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0916 03:37:26.390764    2543 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0916 03:37:26.390791    2543 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 03:37:26.394770    2543 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 03:37:26.398656    2543 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0916 03:37:26.416623    2543 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0916 03:37:27.067555    2543 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0916 03:37:27.188871    2543 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0916 03:37:27.214037    2543 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0916 03:37:27.238286    2543 api_server.go:52] waiting for apiserver process to appear ...
	I0916 03:37:27.238349    2543 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 03:37:27.740410    2543 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 03:37:28.239428    2543 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 03:37:28.244752    2543 api_server.go:72] duration metric: took 1.006498s to wait for apiserver process to appear ...
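The roughly 500ms spacing of the pgrep runs above is a plain poll loop: keep running `pgrep -xnf` until it exits 0 or a deadline passes. Sketched, with an illustrative helper name:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForProcess polls pgrep until the pattern matches a running process
    // or the timeout elapses, a sketch of the loop visible in the log above.
    func waitForProcess(pattern string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if exec.Command("sudo", "pgrep", "-xnf", pattern).Run() == nil {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("no process matching %q within %v", pattern, timeout)
    }

    func main() {
    	fmt.Println(waitForProcess("kube-apiserver.*minikube.*", time.Minute))
    }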
	I0916 03:37:28.244759    2543 api_server.go:88] waiting for apiserver healthz status ...
	I0916 03:37:28.244771    2543 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0916 03:37:30.700375    2543 api_server.go:279] https://192.168.105.4:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0916 03:37:30.700383    2543 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0916 03:37:30.700390    2543 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0916 03:37:30.741327    2543 api_server.go:279] https://192.168.105.4:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0916 03:37:30.741341    2543 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0916 03:37:30.745575    2543 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0916 03:37:30.748164    2543 api_server.go:279] https://192.168.105.4:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0916 03:37:30.748169    2543 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0916 03:37:31.246883    2543 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0916 03:37:31.272445    2543 api_server.go:279] https://192.168.105.4:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0916 03:37:31.272476    2543 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0916 03:37:31.746721    2543 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0916 03:37:31.749600    2543 api_server.go:279] https://192.168.105.4:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0916 03:37:31.749607    2543 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0916 03:37:32.246814    2543 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0916 03:37:32.261838    2543 api_server.go:279] https://192.168.105.4:8441/healthz returned 200:
	ok
	I0916 03:37:32.275716    2543 api_server.go:141] control plane version: v1.31.1
	I0916 03:37:32.275740    2543 api_server.go:131] duration metric: took 4.031101625s to wait for apiserver health ...
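The sequence above is the normal restart progression for /healthz: 403 while the anonymous request is still forbidden (RBAC not yet bootstrapped), then 500 with per-hook [+]/[-] lines as poststarthooks complete one by one, then 200 "ok". A self-contained sketch of the polling; InsecureSkipVerify stands in for minikube's real certificate handling and is only acceptable against a local VM:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // pollHealthz GETs the apiserver's /healthz endpoint until it returns
    // 200, printing the intermediate 403/500 bodies like the log above.
    func pollHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			// Sketch only: skipping verification is tolerable for a
    			// local VM, never for anything externally reachable.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver did not become healthy within %v", timeout)
    }

    func main() {
    	fmt.Println(pollHealthz("https://192.168.105.4:8441/healthz", time.Minute))
    }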
	I0916 03:37:32.275753    2543 cni.go:84] Creating CNI manager for ""
	I0916 03:37:32.275771    2543 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 03:37:32.280269    2543 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0916 03:37:32.283377    2543 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0916 03:37:32.294726    2543 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
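The 496-byte file written to /etc/cni/net.d/1-k8s.conflist above is a bridge CNI configuration. Its exact contents are not shown in this log; the sketch below writes a representative bridge plus host-local conflist to the same path, and the field values are illustrative, not minikube's literal file:

    package main

    import (
    	"fmt"
    	"os"
    )

    // A representative bridge CNI config; minikube's generated file may differ.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        }
      ]
    }`

    func main() {
    	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println(os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644))
    }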
	I0916 03:37:32.308320    2543 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 03:37:32.317462    2543 system_pods.go:59] 7 kube-system pods found
	I0916 03:37:32.317479    2543 system_pods.go:61] "coredns-7c65d6cfc9-tdsz8" [3d8b21e3-d50e-4839-bac4-5bad53c99024] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0916 03:37:32.317486    2543 system_pods.go:61] "etcd-functional-926000" [58795e8e-455e-4b53-9eb1-41f1df957ad7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0916 03:37:32.317491    2543 system_pods.go:61] "kube-apiserver-functional-926000" [d3f2df81-c663-45bb-91db-c35985a29cda] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0916 03:37:32.317495    2543 system_pods.go:61] "kube-controller-manager-functional-926000" [d0b9a921-6b59-4def-806c-93be29a80b19] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0916 03:37:32.317498    2543 system_pods.go:61] "kube-proxy-bx2rn" [589b60c0-a240-4179-9ba6-25888ac83ffa] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0916 03:37:32.317501    2543 system_pods.go:61] "kube-scheduler-functional-926000" [6fec2efe-2de1-45fd-98d3-bc44051b4f45] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0916 03:37:32.317505    2543 system_pods.go:61] "storage-provisioner" [d8164787-06a1-448f-bcc4-73d3ea30129f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0916 03:37:32.317508    2543 system_pods.go:74] duration metric: took 9.183042ms to wait for pod list to return data ...
	I0916 03:37:32.317514    2543 node_conditions.go:102] verifying NodePressure condition ...
	I0916 03:37:32.320257    2543 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0916 03:37:32.320272    2543 node_conditions.go:123] node cpu capacity is 2
	I0916 03:37:32.320285    2543 node_conditions.go:105] duration metric: took 2.768041ms to run NodePressure ...
	I0916 03:37:32.320299    2543 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0916 03:37:32.549714    2543 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0916 03:37:32.552964    2543 kubeadm.go:739] kubelet initialised
	I0916 03:37:32.552971    2543 kubeadm.go:740] duration metric: took 3.24475ms waiting for restarted kubelet to initialise ...
	I0916 03:37:32.552977    2543 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 03:37:32.556981    2543 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-tdsz8" in "kube-system" namespace to be "Ready" ...
	I0916 03:37:34.570950    2543 pod_ready.go:103] pod "coredns-7c65d6cfc9-tdsz8" in "kube-system" namespace has status "Ready":"False"
	I0916 03:37:37.071788    2543 pod_ready.go:103] pod "coredns-7c65d6cfc9-tdsz8" in "kube-system" namespace has status "Ready":"False"
	I0916 03:37:39.072473    2543 pod_ready.go:103] pod "coredns-7c65d6cfc9-tdsz8" in "kube-system" namespace has status "Ready":"False"
	I0916 03:37:41.063780    2543 pod_ready.go:93] pod "coredns-7c65d6cfc9-tdsz8" in "kube-system" namespace has status "Ready":"True"
	I0916 03:37:41.063791    2543 pod_ready.go:82] duration metric: took 8.507076042s for pod "coredns-7c65d6cfc9-tdsz8" in "kube-system" namespace to be "Ready" ...
	I0916 03:37:41.063801    2543 pod_ready.go:79] waiting up to 4m0s for pod "etcd-functional-926000" in "kube-system" namespace to be "Ready" ...
	I0916 03:37:41.068348    2543 pod_ready.go:93] pod "etcd-functional-926000" in "kube-system" namespace has status "Ready":"True"
	I0916 03:37:41.068355    2543 pod_ready.go:82] duration metric: took 4.548125ms for pod "etcd-functional-926000" in "kube-system" namespace to be "Ready" ...
	I0916 03:37:41.068363    2543 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-functional-926000" in "kube-system" namespace to be "Ready" ...
	I0916 03:37:43.081922    2543 pod_ready.go:103] pod "kube-apiserver-functional-926000" in "kube-system" namespace has status "Ready":"False"
	I0916 03:37:45.578856    2543 pod_ready.go:103] pod "kube-apiserver-functional-926000" in "kube-system" namespace has status "Ready":"False"
	I0916 03:37:47.085355    2543 pod_ready.go:93] pod "kube-apiserver-functional-926000" in "kube-system" namespace has status "Ready":"True"
	I0916 03:37:47.085379    2543 pod_ready.go:82] duration metric: took 6.01719925s for pod "kube-apiserver-functional-926000" in "kube-system" namespace to be "Ready" ...
	I0916 03:37:47.085395    2543 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-functional-926000" in "kube-system" namespace to be "Ready" ...
	I0916 03:37:47.093163    2543 pod_ready.go:93] pod "kube-controller-manager-functional-926000" in "kube-system" namespace has status "Ready":"True"
	I0916 03:37:47.093174    2543 pod_ready.go:82] duration metric: took 7.708125ms for pod "kube-controller-manager-functional-926000" in "kube-system" namespace to be "Ready" ...
	I0916 03:37:47.093184    2543 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-bx2rn" in "kube-system" namespace to be "Ready" ...
	I0916 03:37:47.100911    2543 pod_ready.go:93] pod "kube-proxy-bx2rn" in "kube-system" namespace has status "Ready":"True"
	I0916 03:37:47.100923    2543 pod_ready.go:82] duration metric: took 7.732708ms for pod "kube-proxy-bx2rn" in "kube-system" namespace to be "Ready" ...
	I0916 03:37:47.100931    2543 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-functional-926000" in "kube-system" namespace to be "Ready" ...
	I0916 03:37:47.105983    2543 pod_ready.go:93] pod "kube-scheduler-functional-926000" in "kube-system" namespace has status "Ready":"True"
	I0916 03:37:47.105990    2543 pod_ready.go:82] duration metric: took 5.052417ms for pod "kube-scheduler-functional-926000" in "kube-system" namespace to be "Ready" ...
	I0916 03:37:47.105999    2543 pod_ready.go:39] duration metric: took 14.55348125s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
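Each pod_ready wait above is polling the pod's Ready condition through the API server. With client-go, the equivalent single check looks like this (a sketch; it assumes a kubeconfig at the default home location):

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the named pod's Ready condition is True,
    // the same signal the pod_ready waits above are polling for.
    func isPodReady(cs *kubernetes.Clientset, ns, name string) (bool, error) {
    	pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(isPodReady(cs, "kube-system", "coredns-7c65d6cfc9-tdsz8"))
    }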
	I0916 03:37:47.106032    2543 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 03:37:47.116067    2543 ops.go:34] apiserver oom_adj: -16
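The -16 read from /proc/<pid>/oom_adj confirms the apiserver is shielded from the kernel's OOM killer: lower values make a process less likely to be killed, and oom_adj is the legacy sibling of oom_score_adj. Reading it from Go, as a sketch:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // apiserverOOMAdj sketches the check above: locate kube-apiserver with
    // pgrep, then read its legacy OOM-killer adjustment from /proc.
    func apiserverOOMAdj() (string, error) {
    	pid, err := exec.Command("pgrep", "kube-apiserver").Output()
    	if err != nil {
    		return "", err
    	}
    	data, err := os.ReadFile("/proc/" + strings.TrimSpace(string(pid)) + "/oom_adj")
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(data)), nil
    }

    func main() {
    	fmt.Println(apiserverOOMAdj())
    }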
	I0916 03:37:47.116076    2543 kubeadm.go:597] duration metric: took 20.894120459s to restartPrimaryControlPlane
	I0916 03:37:47.116083    2543 kubeadm.go:394] duration metric: took 20.90395025s to StartCluster
	I0916 03:37:47.116100    2543 settings.go:142] acquiring lock: {Name:mk9072b559308de66cf3dabb49aa5dd0b6d18e61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 03:37:47.116291    2543 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19651-1133/kubeconfig
	I0916 03:37:47.117034    2543 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19651-1133/kubeconfig: {Name:mk6c71bad5b7ea3c1178ed072ac13c9e3d1e147d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 03:37:47.117431    2543 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 03:37:47.117450    2543 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 03:37:47.117519    2543 addons.go:69] Setting storage-provisioner=true in profile "functional-926000"
	I0916 03:37:47.117544    2543 addons.go:234] Setting addon storage-provisioner=true in "functional-926000"
	W0916 03:37:47.117556    2543 addons.go:243] addon storage-provisioner should already be in state true
	I0916 03:37:47.117577    2543 host.go:66] Checking if "functional-926000" exists ...
	I0916 03:37:47.117571    2543 addons.go:69] Setting default-storageclass=true in profile "functional-926000"
	I0916 03:37:47.117595    2543 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-926000"
	I0916 03:37:47.117635    2543 config.go:182] Loaded profile config "functional-926000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 03:37:47.119346    2543 addons.go:234] Setting addon default-storageclass=true in "functional-926000"
	W0916 03:37:47.119353    2543 addons.go:243] addon default-storageclass should already be in state true
	I0916 03:37:47.119366    2543 host.go:66] Checking if "functional-926000" exists ...
	I0916 03:37:47.122464    2543 out.go:177] * Verifying Kubernetes components...
	I0916 03:37:47.123065    2543 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 03:37:47.125623    2543 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 03:37:47.125635    2543 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/functional-926000/id_rsa Username:docker}
	I0916 03:37:47.129343    2543 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 03:37:47.134346    2543 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 03:37:47.137404    2543 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 03:37:47.137409    2543 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 03:37:47.137416    2543 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/functional-926000/id_rsa Username:docker}
	I0916 03:37:47.248859    2543 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 03:37:47.255593    2543 node_ready.go:35] waiting up to 6m0s for node "functional-926000" to be "Ready" ...
	I0916 03:37:47.256985    2543 node_ready.go:49] node "functional-926000" has status "Ready":"True"
	I0916 03:37:47.256993    2543 node_ready.go:38] duration metric: took 1.388167ms for node "functional-926000" to be "Ready" ...
	I0916 03:37:47.256995    2543 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 03:37:47.259113    2543 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 03:37:47.259504    2543 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-tdsz8" in "kube-system" namespace to be "Ready" ...
	I0916 03:37:47.305437    2543 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 03:37:47.472353    2543 pod_ready.go:93] pod "coredns-7c65d6cfc9-tdsz8" in "kube-system" namespace has status "Ready":"True"
	I0916 03:37:47.472359    2543 pod_ready.go:82] duration metric: took 212.858209ms for pod "coredns-7c65d6cfc9-tdsz8" in "kube-system" namespace to be "Ready" ...
	I0916 03:37:47.472364    2543 pod_ready.go:79] waiting up to 6m0s for pod "etcd-functional-926000" in "kube-system" namespace to be "Ready" ...
	I0916 03:37:47.597168    2543 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0916 03:37:47.604728    2543 addons.go:510] duration metric: took 487.3035ms for enable addons: enabled=[default-storageclass storage-provisioner]
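Addon enablement above amounts to manifests copied under /etc/kubernetes/addons and applied with the version-pinned kubectl binary against the in-VM kubeconfig. A sketch of that apply step, mirroring the command form in the log:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // applyAddon runs the pinned kubectl with the VM's kubeconfig and
    // applies one addon manifest, as the Run lines above do over SSH.
    func applyAddon(manifest string) error {
    	cmd := exec.Command("sudo",
    		"KUBECONFIG=/var/lib/minikube/kubeconfig",
    		"/var/lib/minikube/binaries/v1.31.1/kubectl",
    		"apply", "-f", manifest)
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	return cmd.Run()
    }

    func main() {
    	fmt.Println(applyAddon("/etc/kubernetes/addons/storage-provisioner.yaml"))
    }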
	I0916 03:37:47.875538    2543 pod_ready.go:93] pod "etcd-functional-926000" in "kube-system" namespace has status "Ready":"True"
	I0916 03:37:47.875552    2543 pod_ready.go:82] duration metric: took 403.195833ms for pod "etcd-functional-926000" in "kube-system" namespace to be "Ready" ...
	I0916 03:37:47.875564    2543 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-functional-926000" in "kube-system" namespace to be "Ready" ...
	I0916 03:37:48.276854    2543 pod_ready.go:93] pod "kube-apiserver-functional-926000" in "kube-system" namespace has status "Ready":"True"
	I0916 03:37:48.276869    2543 pod_ready.go:82] duration metric: took 401.309625ms for pod "kube-apiserver-functional-926000" in "kube-system" namespace to be "Ready" ...
	I0916 03:37:48.276880    2543 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-functional-926000" in "kube-system" namespace to be "Ready" ...
	I0916 03:37:48.680094    2543 pod_ready.go:93] pod "kube-controller-manager-functional-926000" in "kube-system" namespace has status "Ready":"True"
	I0916 03:37:48.680115    2543 pod_ready.go:82] duration metric: took 403.237375ms for pod "kube-controller-manager-functional-926000" in "kube-system" namespace to be "Ready" ...
	I0916 03:37:48.680136    2543 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bx2rn" in "kube-system" namespace to be "Ready" ...
	I0916 03:37:49.078951    2543 pod_ready.go:93] pod "kube-proxy-bx2rn" in "kube-system" namespace has status "Ready":"True"
	I0916 03:37:49.078975    2543 pod_ready.go:82] duration metric: took 398.840041ms for pod "kube-proxy-bx2rn" in "kube-system" namespace to be "Ready" ...
	I0916 03:37:49.078995    2543 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-functional-926000" in "kube-system" namespace to be "Ready" ...
	I0916 03:37:49.479554    2543 pod_ready.go:93] pod "kube-scheduler-functional-926000" in "kube-system" namespace has status "Ready":"True"
	I0916 03:37:49.479584    2543 pod_ready.go:82] duration metric: took 400.584666ms for pod "kube-scheduler-functional-926000" in "kube-system" namespace to be "Ready" ...
	I0916 03:37:49.479605    2543 pod_ready.go:39] duration metric: took 2.222672125s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 03:37:49.479642    2543 api_server.go:52] waiting for apiserver process to appear ...
	I0916 03:37:49.479967    2543 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 03:37:49.498135    2543 api_server.go:72] duration metric: took 2.380757083s to wait for apiserver process to appear ...
	I0916 03:37:49.498149    2543 api_server.go:88] waiting for apiserver healthz status ...
	I0916 03:37:49.498176    2543 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0916 03:37:49.504575    2543 api_server.go:279] https://192.168.105.4:8441/healthz returned 200:
	ok
	I0916 03:37:49.505616    2543 api_server.go:141] control plane version: v1.31.1
	I0916 03:37:49.505625    2543 api_server.go:131] duration metric: took 7.472208ms to wait for apiserver health ...
	I0916 03:37:49.505632    2543 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 03:37:49.687243    2543 system_pods.go:59] 7 kube-system pods found
	I0916 03:37:49.687280    2543 system_pods.go:61] "coredns-7c65d6cfc9-tdsz8" [3d8b21e3-d50e-4839-bac4-5bad53c99024] Running
	I0916 03:37:49.687292    2543 system_pods.go:61] "etcd-functional-926000" [58795e8e-455e-4b53-9eb1-41f1df957ad7] Running
	I0916 03:37:49.687299    2543 system_pods.go:61] "kube-apiserver-functional-926000" [d3f2df81-c663-45bb-91db-c35985a29cda] Running
	I0916 03:37:49.687316    2543 system_pods.go:61] "kube-controller-manager-functional-926000" [d0b9a921-6b59-4def-806c-93be29a80b19] Running
	I0916 03:37:49.687320    2543 system_pods.go:61] "kube-proxy-bx2rn" [589b60c0-a240-4179-9ba6-25888ac83ffa] Running
	I0916 03:37:49.687325    2543 system_pods.go:61] "kube-scheduler-functional-926000" [6fec2efe-2de1-45fd-98d3-bc44051b4f45] Running
	I0916 03:37:49.687330    2543 system_pods.go:61] "storage-provisioner" [d8164787-06a1-448f-bcc4-73d3ea30129f] Running
	I0916 03:37:49.687339    2543 system_pods.go:74] duration metric: took 181.705708ms to wait for pod list to return data ...
	I0916 03:37:49.687352    2543 default_sa.go:34] waiting for default service account to be created ...
	I0916 03:37:49.881052    2543 default_sa.go:45] found service account: "default"
	I0916 03:37:49.881086    2543 default_sa.go:55] duration metric: took 193.730292ms for default service account to be created ...
	I0916 03:37:49.881105    2543 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 03:37:50.085499    2543 system_pods.go:86] 7 kube-system pods found
	I0916 03:37:50.085563    2543 system_pods.go:89] "coredns-7c65d6cfc9-tdsz8" [3d8b21e3-d50e-4839-bac4-5bad53c99024] Running
	I0916 03:37:50.085576    2543 system_pods.go:89] "etcd-functional-926000" [58795e8e-455e-4b53-9eb1-41f1df957ad7] Running
	I0916 03:37:50.085582    2543 system_pods.go:89] "kube-apiserver-functional-926000" [d3f2df81-c663-45bb-91db-c35985a29cda] Running
	I0916 03:37:50.085587    2543 system_pods.go:89] "kube-controller-manager-functional-926000" [d0b9a921-6b59-4def-806c-93be29a80b19] Running
	I0916 03:37:50.085592    2543 system_pods.go:89] "kube-proxy-bx2rn" [589b60c0-a240-4179-9ba6-25888ac83ffa] Running
	I0916 03:37:50.085597    2543 system_pods.go:89] "kube-scheduler-functional-926000" [6fec2efe-2de1-45fd-98d3-bc44051b4f45] Running
	I0916 03:37:50.085601    2543 system_pods.go:89] "storage-provisioner" [d8164787-06a1-448f-bcc4-73d3ea30129f] Running
	I0916 03:37:50.085614    2543 system_pods.go:126] duration metric: took 204.503417ms to wait for k8s-apps to be running ...
	I0916 03:37:50.085626    2543 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 03:37:50.085796    2543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 03:37:50.106599    2543 system_svc.go:56] duration metric: took 20.971708ms WaitForService to wait for kubelet
	I0916 03:37:50.106614    2543 kubeadm.go:582] duration metric: took 2.989258333s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 03:37:50.106634    2543 node_conditions.go:102] verifying NodePressure condition ...
	I0916 03:37:50.280642    2543 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0916 03:37:50.280661    2543 node_conditions.go:123] node cpu capacity is 2
	I0916 03:37:50.280682    2543 node_conditions.go:105] duration metric: took 174.044625ms to run NodePressure ...
	I0916 03:37:50.280703    2543 start.go:241] waiting for startup goroutines ...
	I0916 03:37:50.280719    2543 start.go:246] waiting for cluster config update ...
	I0916 03:37:50.280739    2543 start.go:255] writing updated cluster config ...
	I0916 03:37:50.282053    2543 ssh_runner.go:195] Run: rm -f paused
	I0916 03:37:50.345488    2543 start.go:600] kubectl: 1.29.2, cluster: 1.31.1 (minor skew: 2)
	I0916 03:37:50.349634    2543 out.go:201] 
	W0916 03:37:50.353649    2543 out.go:270] ! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.31.1.
	I0916 03:37:50.356708    2543 out.go:177]   - Want kubectl v1.31.1? Try 'minikube kubectl -- get pods -A'
	I0916 03:37:50.363684    2543 out.go:177] * Done! kubectl is now configured to use "functional-926000" cluster and "default" namespace by default
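The closing warning is a client/server skew check: kubectl 1.29 against a 1.31 control plane is two minor versions apart, outside kubectl's supported skew of one minor version in either direction, hence the suggestion to use the bundled binary. The comparison is a simple minor-version distance, sketched here:

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    )

    // minorSkew parses "major.minor.patch" strings and returns the distance
    // between their minor versions: 2 for 1.29.2 vs 1.31.1, as warned above.
    func minorSkew(client, server string) (int, error) {
    	minor := func(v string) (int, error) {
    		parts := strings.Split(v, ".")
    		if len(parts) < 2 {
    			return 0, fmt.Errorf("malformed version %q", v)
    		}
    		return strconv.Atoi(parts[1])
    	}
    	c, err := minor(client)
    	if err != nil {
    		return 0, err
    	}
    	s, err := minor(server)
    	if err != nil {
    		return 0, err
    	}
    	if c > s {
    		return c - s, nil
    	}
    	return s - c, nil
    }

    func main() {
    	fmt.Println(minorSkew("1.29.2", "1.31.1"))
    }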
	
	
	==> Docker <==
	Sep 16 10:38:28 functional-926000 cri-dockerd[5938]: time="2024-09-16T10:38:28Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/71ce15d800f548a15d79cf1d5ce9203129243ad3cc9444d725dfdae483fcb989/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 16 10:38:29 functional-926000 cri-dockerd[5938]: time="2024-09-16T10:38:29Z" level=info msg="Stop pulling image docker.io/nginx:latest: Status: Image is up to date for nginx:latest"
	Sep 16 10:38:29 functional-926000 dockerd[5683]: time="2024-09-16T10:38:29.457873426Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 16 10:38:29 functional-926000 dockerd[5683]: time="2024-09-16T10:38:29.457905922Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 16 10:38:29 functional-926000 dockerd[5683]: time="2024-09-16T10:38:29.458005825Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 10:38:29 functional-926000 dockerd[5683]: time="2024-09-16T10:38:29.458265458Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 10:38:37 functional-926000 dockerd[5683]: time="2024-09-16T10:38:37.108482442Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 16 10:38:37 functional-926000 dockerd[5683]: time="2024-09-16T10:38:37.112316819Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 16 10:38:37 functional-926000 dockerd[5683]: time="2024-09-16T10:38:37.112337441Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 10:38:37 functional-926000 dockerd[5683]: time="2024-09-16T10:38:37.112395059Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 10:38:37 functional-926000 cri-dockerd[5938]: time="2024-09-16T10:38:37Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3ee9358701d6c1abb5fd50d9a260a6f4905a8d0a9575ec3bccddd421c55778e0/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 16 10:38:38 functional-926000 cri-dockerd[5938]: time="2024-09-16T10:38:38Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	Sep 16 10:38:38 functional-926000 dockerd[5683]: time="2024-09-16T10:38:38.769282775Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 16 10:38:38 functional-926000 dockerd[5683]: time="2024-09-16T10:38:38.769315187Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 16 10:38:38 functional-926000 dockerd[5683]: time="2024-09-16T10:38:38.769489457Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 10:38:38 functional-926000 dockerd[5683]: time="2024-09-16T10:38:38.769565905Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 10:38:38 functional-926000 dockerd[5676]: time="2024-09-16T10:38:38.801271010Z" level=info msg="ignoring event" container=3c51f903bafc950b00913da694230eb14da7555ba15a13e3e6d372fd1087212c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:38:38 functional-926000 dockerd[5683]: time="2024-09-16T10:38:38.801334876Z" level=info msg="shim disconnected" id=3c51f903bafc950b00913da694230eb14da7555ba15a13e3e6d372fd1087212c namespace=moby
	Sep 16 10:38:38 functional-926000 dockerd[5683]: time="2024-09-16T10:38:38.801363248Z" level=warning msg="cleaning up after shim disconnected" id=3c51f903bafc950b00913da694230eb14da7555ba15a13e3e6d372fd1087212c namespace=moby
	Sep 16 10:38:38 functional-926000 dockerd[5683]: time="2024-09-16T10:38:38.801367289Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 16 10:38:40 functional-926000 dockerd[5676]: time="2024-09-16T10:38:40.421729913Z" level=info msg="ignoring event" container=3ee9358701d6c1abb5fd50d9a260a6f4905a8d0a9575ec3bccddd421c55778e0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:38:40 functional-926000 dockerd[5683]: time="2024-09-16T10:38:40.422783028Z" level=info msg="shim disconnected" id=3ee9358701d6c1abb5fd50d9a260a6f4905a8d0a9575ec3bccddd421c55778e0 namespace=moby
	Sep 16 10:38:40 functional-926000 dockerd[5683]: time="2024-09-16T10:38:40.422826981Z" level=warning msg="cleaning up after shim disconnected" id=3ee9358701d6c1abb5fd50d9a260a6f4905a8d0a9575ec3bccddd421c55778e0 namespace=moby
	Sep 16 10:38:40 functional-926000 dockerd[5683]: time="2024-09-16T10:38:40.422831439Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 16 10:38:40 functional-926000 dockerd[5683]: time="2024-09-16T10:38:40.428214957Z" level=warning msg="cleanup warnings time=\"2024-09-16T10:38:40Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	3c51f903bafc9       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   2 seconds ago        Exited              mount-munger              0                   3ee9358701d6c       busybox-mount
	e8fb7e6192f8b       nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3                         11 seconds ago       Running             myfrontend                0                   71ce15d800f54       sp-pod
	d2bf5f62e215e       72565bf5bbedf                                                                                         14 seconds ago       Exited              echoserver-arm            2                   da7195056eaad       hello-node-connect-65d86f57f4-hltsg
	5aa86ff037a16       72565bf5bbedf                                                                                         24 seconds ago       Exited              echoserver-arm            2                   3c62ace40008e       hello-node-64b4f8f9ff-5t4tq
	6bcc6ed648f23       nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf                         35 seconds ago       Running             nginx                     0                   d109f62cdbc43       nginx-svc
	fe7cbcc8bf85b       2f6c962e7b831                                                                                         About a minute ago   Running             coredns                   2                   c54e1d7a15629       coredns-7c65d6cfc9-tdsz8
	1387034fac94d       ba04bb24b9575                                                                                         About a minute ago   Running             storage-provisioner       2                   684dbf6a7eda4       storage-provisioner
	11ba7ecc57aac       24a140c548c07                                                                                         About a minute ago   Running             kube-proxy                2                   4b19f7d5ee20c       kube-proxy-bx2rn
	40c7f64d79ccb       27e3830e14027                                                                                         About a minute ago   Running             etcd                      2                   18f3155347d3e       etcd-functional-926000
	5eb1f39c8fd6b       7f8aa378bb47d                                                                                         About a minute ago   Running             kube-scheduler            2                   81a2188b5cff4       kube-scheduler-functional-926000
	61a1d6a44cbfb       279f381cb3736                                                                                         About a minute ago   Running             kube-controller-manager   2                   e1c3b293999f0       kube-controller-manager-functional-926000
	45b24189b8b8e       d3f53a98c0a9d                                                                                         About a minute ago   Running             kube-apiserver            0                   b4f6a67457d0b       kube-apiserver-functional-926000
	f40d95f035e8a       2f6c962e7b831                                                                                         About a minute ago   Exited              coredns                   1                   29b5f0d351cad       coredns-7c65d6cfc9-tdsz8
	9ede091829624       ba04bb24b9575                                                                                         About a minute ago   Exited              storage-provisioner       1                   4b0e74b65ef95       storage-provisioner
	989ee1ad5ed4e       24a140c548c07                                                                                         About a minute ago   Exited              kube-proxy                1                   fac5976934277       kube-proxy-bx2rn
	a873035c85ce3       279f381cb3736                                                                                         About a minute ago   Exited              kube-controller-manager   1                   f1c7a6897f15a       kube-controller-manager-functional-926000
	5e185df598b99       27e3830e14027                                                                                         About a minute ago   Exited              etcd                      1                   10d2ff7c1c26f       etcd-functional-926000
	6bc77b50b776d       7f8aa378bb47d                                                                                         About a minute ago   Exited              kube-scheduler            1                   2e22edfaeba48       kube-scheduler-functional-926000
	
	
	==> coredns [f40d95f035e8] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:46728 - 26615 "HINFO IN 3098145131932830188.7097477306974365659. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.399287794s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [fe7cbcc8bf85] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:50371 - 11536 "HINFO IN 4981262191056884814.523307535254315818. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.074451828s
	[INFO] 10.244.0.1:65524 - 50069 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.000095029s
	[INFO] 10.244.0.1:5694 - 27344 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.00008928s
	[INFO] 10.244.0.1:45501 - 16559 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.000035245s
	[INFO] 10.244.0.1:52219 - 45199 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.001244082s
	[INFO] 10.244.0.1:58181 - 5846 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.000063658s
	[INFO] 10.244.0.1:32685 - 15196 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.000116818s
	
	
	==> describe nodes <==
	Name:               functional-926000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-926000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=functional-926000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T03_36_10_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:36:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-926000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:38:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:38:32 +0000   Mon, 16 Sep 2024 10:36:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:38:32 +0000   Mon, 16 Sep 2024 10:36:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:38:32 +0000   Mon, 16 Sep 2024 10:36:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:38:32 +0000   Mon, 16 Sep 2024 10:36:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-926000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	System Info:
	  Machine ID:                 52b9dbb1120d47d1835bca186fd35f9c
	  System UUID:                52b9dbb1120d47d1835bca186fd35f9c
	  Boot ID:                    53d3a22e-3f46-44b7-ac97-fce0dbe42086
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-64b4f8f9ff-5t4tq                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
	  default                     hello-node-connect-65d86f57f4-hltsg          0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         39s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-7c65d6cfc9-tdsz8                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m24s
	  kube-system                 etcd-functional-926000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m31s
	  kube-system                 kube-apiserver-functional-926000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         69s
	  kube-system                 kube-controller-manager-functional-926000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m30s
	  kube-system                 kube-proxy-bx2rn                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 kube-scheduler-functional-926000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m30s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m24s                kube-proxy       
	  Normal  Starting                 68s                  kube-proxy       
	  Normal  Starting                 113s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  2m30s                kubelet          Node functional-926000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  2m30s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    2m30s                kubelet          Node functional-926000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m30s                kubelet          Node functional-926000 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m30s                kubelet          Starting kubelet.
	  Normal  NodeReady                2m26s                kubelet          Node functional-926000 status is now: NodeReady
	  Normal  RegisteredNode           2m25s                node-controller  Node functional-926000 event: Registered Node functional-926000 in Controller
	  Normal  NodeHasNoDiskPressure    118s (x8 over 118s)  kubelet          Node functional-926000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  118s (x8 over 118s)  kubelet          Node functional-926000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 118s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     118s (x7 over 118s)  kubelet          Node functional-926000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  118s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           112s                 node-controller  Node functional-926000 event: Registered Node functional-926000 in Controller
	  Normal  Starting                 73s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  73s (x8 over 73s)    kubelet          Node functional-926000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    73s (x8 over 73s)    kubelet          Node functional-926000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     73s (x7 over 73s)    kubelet          Node functional-926000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  73s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           66s                  node-controller  Node functional-926000 event: Registered Node functional-926000 in Controller
	
	
	==> dmesg <==
	[  +0.220897] systemd-fstab-generator[3725]: Ignoring "noauto" option for root device
	[  +0.843223] systemd-fstab-generator[3847]: Ignoring "noauto" option for root device
	[  +4.432197] kauditd_printk_skb: 199 callbacks suppressed
	[Sep16 10:37] systemd-fstab-generator[4751]: Ignoring "noauto" option for root device
	[  +0.056866] kauditd_printk_skb: 33 callbacks suppressed
	[ +11.906931] systemd-fstab-generator[5200]: Ignoring "noauto" option for root device
	[  +0.053908] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.111523] systemd-fstab-generator[5234]: Ignoring "noauto" option for root device
	[  +0.103920] systemd-fstab-generator[5246]: Ignoring "noauto" option for root device
	[  +0.110731] systemd-fstab-generator[5260]: Ignoring "noauto" option for root device
	[  +5.100852] kauditd_printk_skb: 89 callbacks suppressed
	[  +7.414120] systemd-fstab-generator[5891]: Ignoring "noauto" option for root device
	[  +0.088002] systemd-fstab-generator[5903]: Ignoring "noauto" option for root device
	[  +0.088030] systemd-fstab-generator[5915]: Ignoring "noauto" option for root device
	[  +0.104420] systemd-fstab-generator[5930]: Ignoring "noauto" option for root device
	[  +0.228128] systemd-fstab-generator[6095]: Ignoring "noauto" option for root device
	[  +1.124045] systemd-fstab-generator[6216]: Ignoring "noauto" option for root device
	[  +1.241287] kauditd_printk_skb: 189 callbacks suppressed
	[  +5.991499] kauditd_printk_skb: 41 callbacks suppressed
	[ +12.811418] systemd-fstab-generator[7229]: Ignoring "noauto" option for root device
	[  +6.490968] kauditd_printk_skb: 28 callbacks suppressed
	[Sep16 10:38] kauditd_printk_skb: 19 callbacks suppressed
	[  +5.054479] kauditd_printk_skb: 27 callbacks suppressed
	[ +14.068753] kauditd_printk_skb: 38 callbacks suppressed
	[ +16.730904] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [40c7f64d79cc] <==
	{"level":"info","ts":"2024-09-16T10:37:28.654420Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","added-peer-id":"7520ddf439b1d16","added-peer-peer-urls":["https://192.168.105.4:2380"]}
	{"level":"info","ts":"2024-09-16T10:37:28.654479Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:37:28.654507Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:37:28.655611Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:37:28.656190Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-16T10:37:28.656354Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"7520ddf439b1d16","initial-advertise-peer-urls":["https://192.168.105.4:2380"],"listen-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.4:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-16T10:37:28.656382Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-16T10:37:28.656279Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-09-16T10:37:28.656420Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-09-16T10:37:30.235366Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2024-09-16T10:37:30.235543Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-09-16T10:37:30.235644Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-09-16T10:37:30.235811Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2024-09-16T10:37:30.235951Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-09-16T10:37:30.236004Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2024-09-16T10:37:30.236228Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-09-16T10:37:30.242396Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:37:30.242741Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:37:30.242376Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-926000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T10:37:30.243362Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T10:37:30.243546Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T10:37:30.245865Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:37:30.246095Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:37:30.248628Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T10:37:30.248940Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	
	
	==> etcd [5e185df598b9] <==
	{"level":"info","ts":"2024-09-16T10:36:44.873736Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-16T10:36:44.873807Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2024-09-16T10:36:44.873837Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2024-09-16T10:36:44.873854Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-09-16T10:36:44.873890Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2024-09-16T10:36:44.873966Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-09-16T10:36:44.876953Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:36:44.877276Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:36:44.877564Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T10:36:44.877610Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T10:36:44.876949Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-926000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T10:36:44.879468Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:36:44.879468Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:36:44.882084Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-09-16T10:36:44.883927Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T10:37:13.136101Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-16T10:37:13.136128Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"functional-926000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"warn","ts":"2024-09-16T10:37:13.136178Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T10:37:13.136220Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T10:37:13.142578Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T10:37:13.142595Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-16T10:37:13.142614Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2024-09-16T10:37:13.143966Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-09-16T10:37:13.143992Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-09-16T10:37:13.143995Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"functional-926000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	
	==> kernel <==
	 10:38:40 up 2 min,  0 users,  load average: 0.66, 0.44, 0.18
	Linux functional-926000 5.10.207 #1 SMP PREEMPT Sun Sep 15 17:39:25 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [45b24189b8b8] <==
	I0916 10:37:30.855612       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0916 10:37:30.857591       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	E0916 10:37:30.858051       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0916 10:37:30.863916       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0916 10:37:30.863961       1 aggregator.go:171] initial CRD sync complete...
	I0916 10:37:30.863973       1 autoregister_controller.go:144] Starting autoregister controller
	I0916 10:37:30.863981       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0916 10:37:30.863990       1 cache.go:39] Caches are synced for autoregister controller
	I0916 10:37:30.867209       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0916 10:37:30.874728       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0916 10:37:30.874742       1 policy_source.go:224] refreshing policies
	I0916 10:37:30.886882       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 10:37:31.765517       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0916 10:37:32.430866       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0916 10:37:32.434814       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0916 10:37:32.447584       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0916 10:37:32.468428       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0916 10:37:32.470364       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0916 10:37:34.433798       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 10:37:34.532455       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 10:37:51.851484       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.101.84.133"}
	I0916 10:37:57.193742       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0916 10:37:57.236468       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.109.245.74"}
	I0916 10:38:01.270229       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.107.215.238"}
	I0916 10:38:11.704440       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.109.166.140"}
	
	
	==> kube-controller-manager [61a1d6a44cbf] <==
	I0916 10:37:34.748311       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:37:34.780365       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:37:34.780472       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0916 10:37:40.725925       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="8.797544ms"
	I0916 10:37:40.726544       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="41.452µs"
	I0916 10:37:55.027054       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="default/invalid-svc" err="EndpointSlice informer cache is out of date"
	I0916 10:37:57.203549       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="8.221705ms"
	I0916 10:37:57.212907       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="8.982143ms"
	I0916 10:37:57.216562       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="3.612629ms"
	I0916 10:37:57.216593       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="13.415µs"
	I0916 10:38:02.797279       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="28.497µs"
	I0916 10:38:03.817308       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="40.952µs"
	I0916 10:38:04.819322       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="22.788µs"
	I0916 10:38:11.669713       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="5.681652ms"
	I0916 10:38:11.673346       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="3.603098ms"
	I0916 10:38:11.673389       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="22.372µs"
	I0916 10:38:11.678632       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="17.665µs"
	I0916 10:38:12.965359       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="76.24µs"
	I0916 10:38:13.998300       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="47.535µs"
	I0916 10:38:17.034506       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="42.661µs"
	I0916 10:38:26.342802       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="96.654µs"
	I0916 10:38:27.168910       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="35.329µs"
	I0916 10:38:32.330613       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="64.7µs"
	I0916 10:38:32.410263       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-926000"
	I0916 10:38:40.337947       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="85.448µs"
	
	
	==> kube-controller-manager [a873035c85ce] <==
	I0916 10:36:48.734599       1 shared_informer.go:320] Caches are synced for daemon sets
	I0916 10:36:48.736771       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0916 10:36:48.736812       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0916 10:36:48.737901       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0916 10:36:48.738985       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0916 10:36:48.744186       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0916 10:36:48.744281       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0916 10:36:48.745256       1 shared_informer.go:320] Caches are synced for GC
	I0916 10:36:48.746334       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0916 10:36:48.746402       1 shared_informer.go:320] Caches are synced for job
	I0916 10:36:48.747519       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0916 10:36:48.748635       1 shared_informer.go:320] Caches are synced for endpoint
	I0916 10:36:48.749481       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0916 10:36:48.749521       1 shared_informer.go:320] Caches are synced for persistent volume
	I0916 10:36:48.799824       1 shared_informer.go:320] Caches are synced for HPA
	I0916 10:36:48.849522       1 shared_informer.go:320] Caches are synced for cronjob
	I0916 10:36:48.906918       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0916 10:36:48.917404       1 shared_informer.go:320] Caches are synced for disruption
	I0916 10:36:48.918522       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:36:48.950800       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:36:49.107901       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="361.547967ms"
	I0916 10:36:49.108657       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="45.191µs"
	I0916 10:36:49.360178       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:36:49.449066       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:36:49.449099       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [11ba7ecc57aa] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0916 10:37:31.918382       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0916 10:37:31.922474       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	E0916 10:37:31.922507       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:37:31.930076       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0916 10:37:31.930090       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0916 10:37:31.930101       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:37:31.930699       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:37:31.930799       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:37:31.930807       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:37:31.931235       1 config.go:199] "Starting service config controller"
	I0916 10:37:31.931243       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:37:31.931253       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:37:31.931255       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:37:31.931427       1 config.go:328] "Starting node config controller"
	I0916 10:37:31.931429       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:37:32.033190       1 shared_informer.go:320] Caches are synced for node config
	I0916 10:37:32.033191       1 shared_informer.go:320] Caches are synced for service config
	I0916 10:37:32.033213       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [989ee1ad5ed4] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0916 10:36:46.885522       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0916 10:36:46.900460       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	E0916 10:36:46.900493       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:36:46.932516       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0916 10:36:46.932536       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0916 10:36:46.932550       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:36:46.933644       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:36:46.933740       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:36:46.933744       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:36:46.934339       1 config.go:199] "Starting service config controller"
	I0916 10:36:46.934345       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:36:46.934354       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:36:46.934356       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:36:46.934478       1 config.go:328] "Starting node config controller"
	I0916 10:36:46.934480       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:36:47.035346       1 shared_informer.go:320] Caches are synced for node config
	I0916 10:36:47.035346       1 shared_informer.go:320] Caches are synced for service config
	I0916 10:36:47.035373       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [5eb1f39c8fd6] <==
	I0916 10:37:28.272271       1 serving.go:386] Generated self-signed cert in-memory
	W0916 10:37:30.771491       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0916 10:37:30.771508       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0916 10:37:30.771513       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0916 10:37:30.771517       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0916 10:37:30.804930       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0916 10:37:30.805058       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:37:30.805970       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0916 10:37:30.806039       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0916 10:37:30.806072       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 10:37:30.806102       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0916 10:37:30.906543       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [6bc77b50b776] <==
	E0916 10:36:45.426855       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:36:45.426903       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0916 10:36:45.426926       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:36:45.426979       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 10:36:45.427006       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:36:45.427038       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0916 10:36:45.427071       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:36:45.427107       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 10:36:45.427128       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:36:45.427172       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0916 10:36:45.427198       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:36:45.427240       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0916 10:36:45.427261       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:36:45.432468       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 10:36:45.432986       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0916 10:36:45.433338       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0916 10:36:45.433385       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0916 10:36:45.433449       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 10:36:45.433519       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:36:45.433742       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0916 10:36:45.433863       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 10:36:45.433888       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E0916 10:36:45.433871       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0916 10:36:47.021498       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0916 10:37:13.162556       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 16 10:38:27 functional-926000 kubelet[6223]: I0916 10:38:27.407722    6223 scope.go:117] "RemoveContainer" containerID="95b2957426940e6f04b0c21bf035b75e962346588caab724aa274fee779a2ae4"
	Sep 16 10:38:27 functional-926000 kubelet[6223]: I0916 10:38:27.423410    6223 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmg7g\" (UniqueName: \"kubernetes.io/projected/4b07630c-4bec-4a6f-ade7-819c24f5d7bd-kube-api-access-dmg7g\") pod \"4b07630c-4bec-4a6f-ade7-819c24f5d7bd\" (UID: \"4b07630c-4bec-4a6f-ade7-819c24f5d7bd\") "
	Sep 16 10:38:27 functional-926000 kubelet[6223]: I0916 10:38:27.423444    6223 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mypd\" (UniqueName: \"kubernetes.io/host-path/4b07630c-4bec-4a6f-ade7-819c24f5d7bd-pvc-9b1ab11e-0068-41c3-8a43-d9b441e6d762\") pod \"4b07630c-4bec-4a6f-ade7-819c24f5d7bd\" (UID: \"4b07630c-4bec-4a6f-ade7-819c24f5d7bd\") "
	Sep 16 10:38:27 functional-926000 kubelet[6223]: I0916 10:38:27.423477    6223 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b07630c-4bec-4a6f-ade7-819c24f5d7bd-pvc-9b1ab11e-0068-41c3-8a43-d9b441e6d762" (OuterVolumeSpecName: "mypd") pod "4b07630c-4bec-4a6f-ade7-819c24f5d7bd" (UID: "4b07630c-4bec-4a6f-ade7-819c24f5d7bd"). InnerVolumeSpecName "pvc-9b1ab11e-0068-41c3-8a43-d9b441e6d762". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 16 10:38:27 functional-926000 kubelet[6223]: I0916 10:38:27.426059    6223 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b07630c-4bec-4a6f-ade7-819c24f5d7bd-kube-api-access-dmg7g" (OuterVolumeSpecName: "kube-api-access-dmg7g") pod "4b07630c-4bec-4a6f-ade7-819c24f5d7bd" (UID: "4b07630c-4bec-4a6f-ade7-819c24f5d7bd"). InnerVolumeSpecName "kube-api-access-dmg7g". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 16 10:38:27 functional-926000 kubelet[6223]: I0916 10:38:27.526066    6223 reconciler_common.go:288] "Volume detached for volume \"pvc-9b1ab11e-0068-41c3-8a43-d9b441e6d762\" (UniqueName: \"kubernetes.io/host-path/4b07630c-4bec-4a6f-ade7-819c24f5d7bd-pvc-9b1ab11e-0068-41c3-8a43-d9b441e6d762\") on node \"functional-926000\" DevicePath \"\""
	Sep 16 10:38:27 functional-926000 kubelet[6223]: I0916 10:38:27.526080    6223 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-dmg7g\" (UniqueName: \"kubernetes.io/projected/4b07630c-4bec-4a6f-ade7-819c24f5d7bd-kube-api-access-dmg7g\") on node \"functional-926000\" DevicePath \"\""
	Sep 16 10:38:28 functional-926000 kubelet[6223]: E0916 10:38:28.290363    6223 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4b07630c-4bec-4a6f-ade7-819c24f5d7bd" containerName="myfrontend"
	Sep 16 10:38:28 functional-926000 kubelet[6223]: I0916 10:38:28.290393    6223 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b07630c-4bec-4a6f-ade7-819c24f5d7bd" containerName="myfrontend"
	Sep 16 10:38:28 functional-926000 kubelet[6223]: I0916 10:38:28.434400    6223 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgftp\" (UniqueName: \"kubernetes.io/projected/cb649964-0d08-4673-867f-69538a034794-kube-api-access-jgftp\") pod \"sp-pod\" (UID: \"cb649964-0d08-4673-867f-69538a034794\") " pod="default/sp-pod"
	Sep 16 10:38:28 functional-926000 kubelet[6223]: I0916 10:38:28.434476    6223 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9b1ab11e-0068-41c3-8a43-d9b441e6d762\" (UniqueName: \"kubernetes.io/host-path/cb649964-0d08-4673-867f-69538a034794-pvc-9b1ab11e-0068-41c3-8a43-d9b441e6d762\") pod \"sp-pod\" (UID: \"cb649964-0d08-4673-867f-69538a034794\") " pod="default/sp-pod"
	Sep 16 10:38:29 functional-926000 kubelet[6223]: I0916 10:38:29.330479    6223 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b07630c-4bec-4a6f-ade7-819c24f5d7bd" path="/var/lib/kubelet/pods/4b07630c-4bec-4a6f-ade7-819c24f5d7bd/volumes"
	Sep 16 10:38:32 functional-926000 kubelet[6223]: I0916 10:38:32.323072    6223 scope.go:117] "RemoveContainer" containerID="5aa86ff037a163447bb838b45ae770e8a0e6c23076dee75a0b3cb2b2bfea656b"
	Sep 16 10:38:32 functional-926000 kubelet[6223]: E0916 10:38:32.323818    6223 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-64b4f8f9ff-5t4tq_default(ab892f22-2000-4667-9107-70fc4e704051)\"" pod="default/hello-node-64b4f8f9ff-5t4tq" podUID="ab892f22-2000-4667-9107-70fc4e704051"
	Sep 16 10:38:32 functional-926000 kubelet[6223]: I0916 10:38:32.331071    6223 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/sp-pod" podStartSLOduration=3.59833535 podStartE2EDuration="4.331050777s" podCreationTimestamp="2024-09-16 10:38:28 +0000 UTC" firstStartedPulling="2024-09-16 10:38:28.679749565 +0000 UTC m=+61.423328125" lastFinishedPulling="2024-09-16 10:38:29.412464992 +0000 UTC m=+62.156043552" observedRunningTime="2024-09-16 10:38:30.25936421 +0000 UTC m=+63.002942770" watchObservedRunningTime="2024-09-16 10:38:32.331050777 +0000 UTC m=+65.074629337"
	Sep 16 10:38:36 functional-926000 kubelet[6223]: I0916 10:38:36.931248    6223 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/5d64114c-a1aa-463e-9064-23270fd00e07-test-volume\") pod \"busybox-mount\" (UID: \"5d64114c-a1aa-463e-9064-23270fd00e07\") " pod="default/busybox-mount"
	Sep 16 10:38:36 functional-926000 kubelet[6223]: I0916 10:38:36.931293    6223 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmqc8\" (UniqueName: \"kubernetes.io/projected/5d64114c-a1aa-463e-9064-23270fd00e07-kube-api-access-tmqc8\") pod \"busybox-mount\" (UID: \"5d64114c-a1aa-463e-9064-23270fd00e07\") " pod="default/busybox-mount"
	Sep 16 10:38:40 functional-926000 kubelet[6223]: I0916 10:38:40.323578    6223 scope.go:117] "RemoveContainer" containerID="d2bf5f62e215eecffc7326f1e391c05833b1c409c20c68292684e79773896c4e"
	Sep 16 10:38:40 functional-926000 kubelet[6223]: E0916 10:38:40.323856    6223 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-65d86f57f4-hltsg_default(d5b475a4-6c2a-4117-aede-ee307497e708)\"" pod="default/hello-node-connect-65d86f57f4-hltsg" podUID="d5b475a4-6c2a-4117-aede-ee307497e708"
	Sep 16 10:38:40 functional-926000 kubelet[6223]: I0916 10:38:40.476420    6223 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmqc8\" (UniqueName: \"kubernetes.io/projected/5d64114c-a1aa-463e-9064-23270fd00e07-kube-api-access-tmqc8\") pod \"5d64114c-a1aa-463e-9064-23270fd00e07\" (UID: \"5d64114c-a1aa-463e-9064-23270fd00e07\") "
	Sep 16 10:38:40 functional-926000 kubelet[6223]: I0916 10:38:40.476438    6223 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/5d64114c-a1aa-463e-9064-23270fd00e07-test-volume\") pod \"5d64114c-a1aa-463e-9064-23270fd00e07\" (UID: \"5d64114c-a1aa-463e-9064-23270fd00e07\") "
	Sep 16 10:38:40 functional-926000 kubelet[6223]: I0916 10:38:40.476477    6223 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d64114c-a1aa-463e-9064-23270fd00e07-test-volume" (OuterVolumeSpecName: "test-volume") pod "5d64114c-a1aa-463e-9064-23270fd00e07" (UID: "5d64114c-a1aa-463e-9064-23270fd00e07"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 16 10:38:40 functional-926000 kubelet[6223]: I0916 10:38:40.477310    6223 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d64114c-a1aa-463e-9064-23270fd00e07-kube-api-access-tmqc8" (OuterVolumeSpecName: "kube-api-access-tmqc8") pod "5d64114c-a1aa-463e-9064-23270fd00e07" (UID: "5d64114c-a1aa-463e-9064-23270fd00e07"). InnerVolumeSpecName "kube-api-access-tmqc8". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 16 10:38:40 functional-926000 kubelet[6223]: I0916 10:38:40.577130    6223 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-tmqc8\" (UniqueName: \"kubernetes.io/projected/5d64114c-a1aa-463e-9064-23270fd00e07-kube-api-access-tmqc8\") on node \"functional-926000\" DevicePath \"\""
	Sep 16 10:38:40 functional-926000 kubelet[6223]: I0916 10:38:40.577150    6223 reconciler_common.go:288] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/5d64114c-a1aa-463e-9064-23270fd00e07-test-volume\") on node \"functional-926000\" DevicePath \"\""
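
The two "Error syncing pod" lines above are kubelet's crash-loop handling: after each failed restart of echoserver-arm, the next restart is delayed by an exponentially growing back-off (the log shows the 20s step). The following standalone Go sketch only illustrates the shape of that policy; the 10s initial delay and 5m cap are upstream kubelet defaults assumed here, and crashLoopDelay is an illustrative name, not kubelet's actual implementation.

	package main

	import (
		"fmt"
		"time"
	)

	// crashLoopDelay doubles the restart delay per failed restart and caps it,
	// mirroring the "back-off 20s restarting failed container" step logged above.
	func crashLoopDelay(restarts int, initial, limit time.Duration) time.Duration {
		d := initial
		for i := 0; i < restarts; i++ {
			d *= 2
			if d > limit {
				return limit
			}
		}
		return d
	}

	func main() {
		for r := 0; r <= 5; r++ {
			fmt.Printf("restart %d -> back-off %s\n", r, crashLoopDelay(r, 10*time.Second, 5*time.Minute))
		}
	}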
	
	
	==> storage-provisioner [1387034fac94] <==
	I0916 10:37:31.866507       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 10:37:31.877850       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 10:37:31.878039       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 10:37:49.315857       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 10:37:49.316643       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-926000_34eb0c0f-e2b5-4fa9-a3ba-6c25058f82fa!
	I0916 10:37:49.316816       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"aab8056a-dec4-4ed6-b51d-b4e53c49f4a2", APIVersion:"v1", ResourceVersion:"590", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-926000_34eb0c0f-e2b5-4fa9-a3ba-6c25058f82fa became leader
	I0916 10:37:49.422266       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-926000_34eb0c0f-e2b5-4fa9-a3ba-6c25058f82fa!
	I0916 10:38:14.906659       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0916 10:38:14.906688       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    413f554c-d3cc-4e32-9440-150d09b43d52 310 0 2024-09-16 10:36:16 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-09-16 10:36:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-9b1ab11e-0068-41c3-8a43-d9b441e6d762 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  9b1ab11e-0068-41c3-8a43-d9b441e6d762 722 0 2024-09-16 10:38:14 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-09-16 10:38:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-09-16 10:38:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0916 10:38:14.907194       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-9b1ab11e-0068-41c3-8a43-d9b441e6d762" provisioned
	I0916 10:38:14.907208       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0916 10:38:14.907212       1 volume_store.go:212] Trying to save persistentvolume "pvc-9b1ab11e-0068-41c3-8a43-d9b441e6d762"
	I0916 10:38:14.909187       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"9b1ab11e-0068-41c3-8a43-d9b441e6d762", APIVersion:"v1", ResourceVersion:"722", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0916 10:38:14.915536       1 volume_store.go:219] persistentvolume "pvc-9b1ab11e-0068-41c3-8a43-d9b441e6d762" saved
	I0916 10:38:14.915798       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"9b1ab11e-0068-41c3-8a43-d9b441e6d762", APIVersion:"v1", ResourceVersion:"722", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-9b1ab11e-0068-41c3-8a43-d9b441e6d762
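
The provisioning sequence above ends with the claim's volume being placed at /tmp/hostpath-provisioner/default/myclaim. A minimal Go sketch of what a hostpath provisioner does for such a claim, assuming the base-dir/namespace/claim layout visible in the log (function and parameter names are illustrative, not minikube's storage-provisioner code):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	// provisionHostPath creates a per-claim directory under the provisioner's
	// base directory and returns it as the volume's host path.
	func provisionHostPath(baseDir, namespace, claim string) (string, error) {
		path := filepath.Join(baseDir, namespace, claim)
		if err := os.MkdirAll(path, 0o777); err != nil {
			return "", fmt.Errorf("provisioning %s/%s: %w", namespace, claim, err)
		}
		return path, nil
	}

	func main() {
		path, err := provisionHostPath("/tmp/hostpath-provisioner", "default", "myclaim")
		if err != nil {
			fmt.Println("provision failed:", err)
			return
		}
		fmt.Println("provisioned volume at", path) // matches the path in the log above
	}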
	
	
	==> storage-provisioner [9ede09182962] <==
	I0916 10:36:46.814545       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 10:36:46.823390       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 10:36:46.823411       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 10:36:46.827901       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 10:36:46.827976       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-926000_71e51ee1-5f49-4507-a4ec-eb91c9302da1!
	I0916 10:36:46.828474       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"aab8056a-dec4-4ed6-b51d-b4e53c49f4a2", APIVersion:"v1", ResourceVersion:"399", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-926000_71e51ee1-5f49-4507-a4ec-eb91c9302da1 became leader
	I0916 10:36:46.928621       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-926000_71e51ee1-5f49-4507-a4ec-eb91c9302da1!
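
This second provisioner instance went through the same leader-election handshake ("attempting to acquire leader lease" then "successfully acquired lease"), which is what guarantees only one copy of the controller runs at a time. A minimal client-go sketch of that pattern, assuming in-cluster config and an illustrative identity string; note the log shows an Endpoints-based lock, while this sketch uses the Lease lock that newer client-go versions favor:

	package main

	import (
		"context"
		"log"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// Same lock name/namespace as in the log above.
		lock := &resourcelock.LeaseLock{
			LeaseMeta:  metav1.ObjectMeta{Namespace: "kube-system", Name: "k8s.io-minikube-hostpath"},
			Client:     client.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: "my-provisioner-id"},
		}

		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) {
					log.Println("acquired lease, starting provisioner controller")
				},
				OnStoppedLeading: func() {
					log.Println("lost lease, stopping")
				},
			},
		})
	}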
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-926000 -n functional-926000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-926000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-926000 describe pod busybox-mount
helpers_test.go:282: (dbg) kubectl --context functional-926000 describe pod busybox-mount:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-926000/192.168.105.4
	Start Time:       Mon, 16 Sep 2024 03:38:36 -0700
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  mount-munger:
	    Container ID:  docker://3c51f903bafc950b00913da694230eb14da7555ba15a13e3e6d372fd1087212c
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 16 Sep 2024 03:38:38 -0700
	      Finished:     Mon, 16 Sep 2024 03:38:38 -0700
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tmqc8 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-tmqc8:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  4s    default-scheduler  Successfully assigned default/busybox-mount to functional-926000
	  Normal  Pulling    4s    kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     3s    kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.569s (1.569s including waiting). Image size: 3547125 bytes.
	  Normal  Created    3s    kubelet            Created container mount-munger
	  Normal  Started    3s    kubelet            Started container mount-munger

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (29.81s)
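
The post-mortem above flags busybox-mount as "non-running" even though its Status is Succeeded, because the harness selects pods with --field-selector=status.phase!=Running, which matches completed pods as well as failed ones. A client-go equivalent of that query, assuming a standard ~/.kube/config (a sketch only, not the harness code):

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// Same filter the harness uses: any pod whose phase is not Running,
		// which also catches Succeeded pods like busybox-mount.
		pods, err := client.CoreV1().Pods(metav1.NamespaceAll).List(context.Background(),
			metav1.ListOptions{FieldSelector: "status.phase!=Running"})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s/%s: %s\n", p.Namespace, p.Name, p.Status.Phase)
		}
	}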

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (214.17s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-574000 node stop m02 -v=7 --alsologtostderr
E0916 03:42:57.491825    1652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/functional-926000/client.crt: no such file or directory" logger="UnhandledError"
E0916 03:42:57.815311    1652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/functional-926000/client.crt: no such file or directory" logger="UnhandledError"
E0916 03:42:58.458832    1652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/functional-926000/client.crt: no such file or directory" logger="UnhandledError"
E0916 03:42:59.740954    1652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/functional-926000/client.crt: no such file or directory" logger="UnhandledError"
E0916 03:43:02.302537    1652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/functional-926000/client.crt: no such file or directory" logger="UnhandledError"
E0916 03:43:07.425812    1652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/functional-926000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Done: out/minikube-darwin-arm64 -p ha-574000 node stop m02 -v=7 --alsologtostderr: (12.197141917s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-574000 status -v=7 --alsologtostderr
E0916 03:43:17.669023    1652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/functional-926000/client.crt: no such file or directory" logger="UnhandledError"
E0916 03:43:27.609829    1652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/addons-490000/client.crt: no such file or directory" logger="UnhandledError"
E0916 03:43:38.151032    1652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/functional-926000/client.crt: no such file or directory" logger="UnhandledError"
E0916 03:43:55.336374    1652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/addons-490000/client.crt: no such file or directory" logger="UnhandledError"
E0916 03:44:19.113414    1652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/functional-926000/client.crt: no such file or directory" logger="UnhandledError"
E0916 03:45:41.034337    1652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/functional-926000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-574000 status -v=7 --alsologtostderr: exit status 7 (2m56.013275709s)

                                                
                                                
-- stdout --
	ha-574000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-574000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-574000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-574000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0916 03:43:09.705724    3240 out.go:345] Setting OutFile to fd 1 ...
	I0916 03:43:09.706165    3240 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 03:43:09.706179    3240 out.go:358] Setting ErrFile to fd 2...
	I0916 03:43:09.706183    3240 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 03:43:09.706430    3240 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1133/.minikube/bin
	I0916 03:43:09.706694    3240 out.go:352] Setting JSON to false
	I0916 03:43:09.706716    3240 mustload.go:65] Loading cluster: ha-574000
	I0916 03:43:09.706770    3240 notify.go:220] Checking for updates...
	I0916 03:43:09.707348    3240 config.go:182] Loaded profile config "ha-574000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 03:43:09.707358    3240 status.go:255] checking status of ha-574000 ...
	I0916 03:43:09.708252    3240 status.go:330] ha-574000 host status = "Running" (err=<nil>)
	I0916 03:43:09.708260    3240 host.go:66] Checking if "ha-574000" exists ...
	I0916 03:43:09.708370    3240 host.go:66] Checking if "ha-574000" exists ...
	I0916 03:43:09.708498    3240 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 03:43:09.708507    3240 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/ha-574000/id_rsa Username:docker}
	W0916 03:43:35.657020    3240 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: operation timed out
	W0916 03:43:35.657313    3240 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0916 03:43:35.657336    3240 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0916 03:43:35.657348    3240 status.go:257] ha-574000 status: &{Name:ha-574000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0916 03:43:35.657370    3240 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0916 03:43:35.657381    3240 status.go:255] checking status of ha-574000-m02 ...
	I0916 03:43:35.657765    3240 status.go:330] ha-574000-m02 host status = "Stopped" (err=<nil>)
	I0916 03:43:35.657778    3240 status.go:343] host is not running, skipping remaining checks
	I0916 03:43:35.657781    3240 status.go:257] ha-574000-m02 status: &{Name:ha-574000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 03:43:35.657788    3240 status.go:255] checking status of ha-574000-m03 ...
	I0916 03:43:35.659164    3240 status.go:330] ha-574000-m03 host status = "Running" (err=<nil>)
	I0916 03:43:35.659200    3240 host.go:66] Checking if "ha-574000-m03" exists ...
	I0916 03:43:35.659412    3240 host.go:66] Checking if "ha-574000-m03" exists ...
	I0916 03:43:35.659604    3240 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 03:43:35.659616    3240 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/ha-574000-m03/id_rsa Username:docker}
	W0916 03:44:50.659803    3240 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0916 03:44:50.659858    3240 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0916 03:44:50.659876    3240 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0916 03:44:50.659879    3240 status.go:257] ha-574000-m03 status: &{Name:ha-574000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0916 03:44:50.659895    3240 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0916 03:44:50.659899    3240 status.go:255] checking status of ha-574000-m04 ...
	I0916 03:44:50.660866    3240 status.go:330] ha-574000-m04 host status = "Running" (err=<nil>)
	I0916 03:44:50.660874    3240 host.go:66] Checking if "ha-574000-m04" exists ...
	I0916 03:44:50.660995    3240 host.go:66] Checking if "ha-574000-m04" exists ...
	I0916 03:44:50.661122    3240 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 03:44:50.661130    3240 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/ha-574000-m04/id_rsa Username:docker}
	W0916 03:46:05.651459    3240 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0916 03:46:05.651510    3240 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0916 03:46:05.651519    3240 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0916 03:46:05.651523    3240 status.go:257] ha-574000-m04 status: &{Name:ha-574000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0916 03:46:05.651532    3240 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

                                                
                                                
** /stderr **
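Every "host: Error" row above traces back to the same failure: the status check could not reach the node's SSH port before its dial timed out ("dial tcp 192.168.105.x:22: connect: operation timed out"). A minimal reachability probe with an explicit dial timeout reproduces that failure mode (addresses taken from the log; this is a sketch, not minikube's status code):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// The SSH endpoints minikube's status check dials above; timed-out
		// dials are what produce the Error/Nonexistent rows.
		for _, addr := range []string{"192.168.105.5:22", "192.168.105.7:22", "192.168.105.8:22"} {
			conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
			if err != nil {
				fmt.Println(addr, "unreachable:", err)
				continue
			}
			conn.Close()
			fmt.Println(addr, "reachable")
		}
	}
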
ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-574000 status -v=7 --alsologtostderr": ha-574000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-574000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-574000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-574000-m04
type: Worker
host: Error
kubelet: Nonexistent

                                                
                                                
ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-574000 status -v=7 --alsologtostderr": ha-574000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-574000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-574000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-574000-m04
type: Worker
host: Error
kubelet: Nonexistent

                                                
                                                
ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-574000 status -v=7 --alsologtostderr": ha-574000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-574000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-574000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-574000-m04
type: Worker
host: Error
kubelet: Nonexistent

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-574000 -n ha-574000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-574000 -n ha-574000: exit status 3 (25.960702167s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0916 03:46:31.589679    3266 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0916 03:46:31.589690    3266 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-574000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (214.17s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (102.91s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (1m16.944486667s)
ha_test.go:413: expected profile "ha-574000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-574000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-574000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-574000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-574000 -n ha-574000
E0916 03:47:57.118547    1652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/functional-926000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-574000 -n ha-574000: exit status 3 (25.96517025s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0916 03:48:14.486626    3290 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0916 03:48:14.486667    3290 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-574000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (102.91s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (208.88s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-574000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-574000 node start m02 -v=7 --alsologtostderr: exit status 80 (5.12403775s)

                                                
                                                
-- stdout --
	* Starting "ha-574000-m02" control-plane node in "ha-574000" cluster
	* Restarting existing qemu2 VM for "ha-574000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-574000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0916 03:48:14.562783    3295 out.go:345] Setting OutFile to fd 1 ...
	I0916 03:48:14.563116    3295 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 03:48:14.563120    3295 out.go:358] Setting ErrFile to fd 2...
	I0916 03:48:14.563123    3295 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 03:48:14.563287    3295 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1133/.minikube/bin
	I0916 03:48:14.563582    3295 mustload.go:65] Loading cluster: ha-574000
	I0916 03:48:14.563873    3295 config.go:182] Loaded profile config "ha-574000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	W0916 03:48:14.564200    3295 host.go:58] "ha-574000-m02" host status: Stopped
	I0916 03:48:14.568659    3295 out.go:177] * Starting "ha-574000-m02" control-plane node in "ha-574000" cluster
	I0916 03:48:14.571678    3295 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 03:48:14.571696    3295 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0916 03:48:14.571709    3295 cache.go:56] Caching tarball of preloaded images
	I0916 03:48:14.571853    3295 preload.go:172] Found /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0916 03:48:14.571880    3295 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0916 03:48:14.571979    3295 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/ha-574000/config.json ...
	I0916 03:48:14.572641    3295 start.go:360] acquireMachinesLock for ha-574000-m02: {Name:mk751074fc8f4092f9eb4cf42ed6e63cca97a7fa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 03:48:14.572717    3295 start.go:364] duration metric: took 37.458µs to acquireMachinesLock for "ha-574000-m02"
	I0916 03:48:14.572727    3295 start.go:96] Skipping create...Using existing machine configuration
	I0916 03:48:14.572733    3295 fix.go:54] fixHost starting: m02
	I0916 03:48:14.572875    3295 fix.go:112] recreateIfNeeded on ha-574000-m02: state=Stopped err=<nil>
	W0916 03:48:14.572882    3295 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 03:48:14.577575    3295 out.go:177] * Restarting existing qemu2 VM for "ha-574000-m02" ...
	I0916 03:48:14.582652    3295 qemu.go:418] Using hvf for hardware acceleration
	I0916 03:48:14.582708    3295 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/ha-574000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/ha-574000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/ha-574000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:92:f4:d0:6f:ff -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/ha-574000-m02/disk.qcow2
	I0916 03:48:14.585889    3295 main.go:141] libmachine: STDOUT: 
	I0916 03:48:14.585910    3295 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 03:48:14.585944    3295 fix.go:56] duration metric: took 13.210292ms for fixHost
	I0916 03:48:14.585950    3295 start.go:83] releasing machines lock for "ha-574000-m02", held for 13.228292ms
	W0916 03:48:14.585956    3295 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0916 03:48:14.585993    3295 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 03:48:14.585999    3295 start.go:729] Will try again in 5 seconds ...
	I0916 03:48:19.587912    3295 start.go:360] acquireMachinesLock for ha-574000-m02: {Name:mk751074fc8f4092f9eb4cf42ed6e63cca97a7fa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 03:48:19.588101    3295 start.go:364] duration metric: took 151.125µs to acquireMachinesLock for "ha-574000-m02"
	I0916 03:48:19.588147    3295 start.go:96] Skipping create...Using existing machine configuration
	I0916 03:48:19.588151    3295 fix.go:54] fixHost starting: m02
	I0916 03:48:19.588317    3295 fix.go:112] recreateIfNeeded on ha-574000-m02: state=Stopped err=<nil>
	W0916 03:48:19.588326    3295 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 03:48:19.591742    3295 out.go:177] * Restarting existing qemu2 VM for "ha-574000-m02" ...
	I0916 03:48:19.595966    3295 qemu.go:418] Using hvf for hardware acceleration
	I0916 03:48:19.596012    3295 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/ha-574000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/ha-574000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/ha-574000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:92:f4:d0:6f:ff -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/ha-574000-m02/disk.qcow2
	I0916 03:48:19.598249    3295 main.go:141] libmachine: STDOUT: 
	I0916 03:48:19.598268    3295 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 03:48:19.598288    3295 fix.go:56] duration metric: took 10.137459ms for fixHost
	I0916 03:48:19.598292    3295 start.go:83] releasing machines lock for "ha-574000-m02", held for 10.18075ms
	W0916 03:48:19.598332    3295 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-574000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-574000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 03:48:19.601910    3295 out.go:201] 
	W0916 03:48:19.605833    3295 out.go:270] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0916 03:48:19.605838    3295 out.go:270] * 
	* 
	W0916 03:48:19.607650    3295 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 03:48:19.611931    3295 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:422: I0916 03:48:14.562783    3295 out.go:345] Setting OutFile to fd 1 ...
I0916 03:48:14.563116    3295 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 03:48:14.563120    3295 out.go:358] Setting ErrFile to fd 2...
I0916 03:48:14.563123    3295 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 03:48:14.563287    3295 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1133/.minikube/bin
I0916 03:48:14.563582    3295 mustload.go:65] Loading cluster: ha-574000
I0916 03:48:14.563873    3295 config.go:182] Loaded profile config "ha-574000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
W0916 03:48:14.564200    3295 host.go:58] "ha-574000-m02" host status: Stopped
I0916 03:48:14.568659    3295 out.go:177] * Starting "ha-574000-m02" control-plane node in "ha-574000" cluster
I0916 03:48:14.571678    3295 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0916 03:48:14.571696    3295 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
I0916 03:48:14.571709    3295 cache.go:56] Caching tarball of preloaded images
I0916 03:48:14.571853    3295 preload.go:172] Found /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0916 03:48:14.571880    3295 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
I0916 03:48:14.571979    3295 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/ha-574000/config.json ...
I0916 03:48:14.572641    3295 start.go:360] acquireMachinesLock for ha-574000-m02: {Name:mk751074fc8f4092f9eb4cf42ed6e63cca97a7fa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0916 03:48:14.572717    3295 start.go:364] duration metric: took 37.458µs to acquireMachinesLock for "ha-574000-m02"
I0916 03:48:14.572727    3295 start.go:96] Skipping create...Using existing machine configuration
I0916 03:48:14.572733    3295 fix.go:54] fixHost starting: m02
I0916 03:48:14.572875    3295 fix.go:112] recreateIfNeeded on ha-574000-m02: state=Stopped err=<nil>
W0916 03:48:14.572882    3295 fix.go:138] unexpected machine state, will restart: <nil>
I0916 03:48:14.577575    3295 out.go:177] * Restarting existing qemu2 VM for "ha-574000-m02" ...
I0916 03:48:14.582652    3295 qemu.go:418] Using hvf for hardware acceleration
I0916 03:48:14.582708    3295 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/ha-574000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/ha-574000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/ha-574000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:92:f4:d0:6f:ff -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/ha-574000-m02/disk.qcow2
I0916 03:48:14.585889    3295 main.go:141] libmachine: STDOUT: 
I0916 03:48:14.585910    3295 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

                                                
                                                
I0916 03:48:14.585944    3295 fix.go:56] duration metric: took 13.210292ms for fixHost
I0916 03:48:14.585950    3295 start.go:83] releasing machines lock for "ha-574000-m02", held for 13.228292ms
W0916 03:48:14.585956    3295 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0916 03:48:14.585993    3295 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0916 03:48:14.585999    3295 start.go:729] Will try again in 5 seconds ...
I0916 03:48:19.587912    3295 start.go:360] acquireMachinesLock for ha-574000-m02: {Name:mk751074fc8f4092f9eb4cf42ed6e63cca97a7fa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0916 03:48:19.588101    3295 start.go:364] duration metric: took 151.125µs to acquireMachinesLock for "ha-574000-m02"
I0916 03:48:19.588147    3295 start.go:96] Skipping create...Using existing machine configuration
I0916 03:48:19.588151    3295 fix.go:54] fixHost starting: m02
I0916 03:48:19.588317    3295 fix.go:112] recreateIfNeeded on ha-574000-m02: state=Stopped err=<nil>
W0916 03:48:19.588326    3295 fix.go:138] unexpected machine state, will restart: <nil>
I0916 03:48:19.591742    3295 out.go:177] * Restarting existing qemu2 VM for "ha-574000-m02" ...
I0916 03:48:19.595966    3295 qemu.go:418] Using hvf for hardware acceleration
I0916 03:48:19.596012    3295 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/ha-574000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/ha-574000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/ha-574000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:92:f4:d0:6f:ff -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/ha-574000-m02/disk.qcow2
I0916 03:48:19.598249    3295 main.go:141] libmachine: STDOUT: 
I0916 03:48:19.598268    3295 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

                                                
                                                
I0916 03:48:19.598288    3295 fix.go:56] duration metric: took 10.137459ms for fixHost
I0916 03:48:19.598292    3295 start.go:83] releasing machines lock for "ha-574000-m02", held for 10.18075ms
W0916 03:48:19.598332    3295 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-574000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p ha-574000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0916 03:48:19.601910    3295 out.go:201] 
W0916 03:48:19.605833    3295 out.go:270] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0916 03:48:19.605838    3295 out.go:270] * 
* 
W0916 03:48:19.607650    3295 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0916 03:48:19.611931    3295 out.go:201] 

                                                
                                                
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-574000 node start m02 -v=7 --alsologtostderr": exit status 80
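Both restart attempts die at the same point: qemu is launched through /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach the socket_vmnet daemon's unix socket ("Failed to connect to \"/var/run/socket_vmnet\": Connection refused"), so the VM never gets its network. A one-shot probe of that socket fails the same way whenever the daemon is not running (a sketch, not minikube code):

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// Probe the unix socket that socket_vmnet_client hands to qemu; a
		// "connection refused" here matches the driver failure in the log.
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}
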
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-574000 status -v=7 --alsologtostderr
E0916 03:48:24.835890    1652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/functional-926000/client.crt: no such file or directory" logger="UnhandledError"
E0916 03:48:27.563357    1652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/addons-490000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-574000 status -v=7 --alsologtostderr: exit status 7 (2m57.771010708s)

                                                
                                                
-- stdout --
	ha-574000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-574000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-574000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-574000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0916 03:48:19.647730    3299 out.go:345] Setting OutFile to fd 1 ...
	I0916 03:48:19.648137    3299 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 03:48:19.648143    3299 out.go:358] Setting ErrFile to fd 2...
	I0916 03:48:19.648146    3299 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 03:48:19.648284    3299 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1133/.minikube/bin
	I0916 03:48:19.648405    3299 out.go:352] Setting JSON to false
	I0916 03:48:19.648414    3299 mustload.go:65] Loading cluster: ha-574000
	I0916 03:48:19.648480    3299 notify.go:220] Checking for updates...
	I0916 03:48:19.648640    3299 config.go:182] Loaded profile config "ha-574000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 03:48:19.648646    3299 status.go:255] checking status of ha-574000 ...
	I0916 03:48:19.649356    3299 status.go:330] ha-574000 host status = "Running" (err=<nil>)
	I0916 03:48:19.649363    3299 host.go:66] Checking if "ha-574000" exists ...
	I0916 03:48:19.649452    3299 host.go:66] Checking if "ha-574000" exists ...
	I0916 03:48:19.649572    3299 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 03:48:19.649581    3299 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/ha-574000/id_rsa Username:docker}
	W0916 03:48:19.649775    3299 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0916 03:48:19.649790    3299 retry.go:31] will retry after 131.652294ms: dial tcp 192.168.105.5:22: connect: host is down
	W0916 03:48:19.783590    3299 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0916 03:48:19.783609    3299 retry.go:31] will retry after 475.982536ms: dial tcp 192.168.105.5:22: connect: host is down
	W0916 03:48:20.261727    3299 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0916 03:48:20.261748    3299 retry.go:31] will retry after 808.318189ms: dial tcp 192.168.105.5:22: connect: host is down
	W0916 03:48:21.072230    3299 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0916 03:48:21.072303    3299 retry.go:31] will retry after 140.397132ms: new client: new client: dial tcp 192.168.105.5:22: connect: host is down
	I0916 03:48:21.214759    3299 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/ha-574000/id_rsa Username:docker}
	W0916 03:48:21.215034    3299 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0916 03:48:21.215046    3299 retry.go:31] will retry after 234.08074ms: dial tcp 192.168.105.5:22: connect: host is down
	W0916 03:48:47.374043    3299 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: operation timed out
	W0916 03:48:47.374097    3299 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0916 03:48:47.374105    3299 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0916 03:48:47.374108    3299 status.go:257] ha-574000 status: &{Name:ha-574000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0916 03:48:47.374119    3299 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0916 03:48:47.374122    3299 status.go:255] checking status of ha-574000-m02 ...
	I0916 03:48:47.374328    3299 status.go:330] ha-574000-m02 host status = "Stopped" (err=<nil>)
	I0916 03:48:47.374334    3299 status.go:343] host is not running, skipping remaining checks
	I0916 03:48:47.374336    3299 status.go:257] ha-574000-m02 status: &{Name:ha-574000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 03:48:47.374340    3299 status.go:255] checking status of ha-574000-m03 ...
	I0916 03:48:47.375044    3299 status.go:330] ha-574000-m03 host status = "Running" (err=<nil>)
	I0916 03:48:47.375051    3299 host.go:66] Checking if "ha-574000-m03" exists ...
	I0916 03:48:47.375164    3299 host.go:66] Checking if "ha-574000-m03" exists ...
	I0916 03:48:47.375302    3299 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 03:48:47.375308    3299 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/ha-574000-m03/id_rsa Username:docker}
	W0916 03:50:02.374971    3299 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0916 03:50:02.375073    3299 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0916 03:50:02.375092    3299 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0916 03:50:02.375102    3299 status.go:257] ha-574000-m03 status: &{Name:ha-574000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0916 03:50:02.375121    3299 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0916 03:50:02.375132    3299 status.go:255] checking status of ha-574000-m04 ...
	I0916 03:50:02.376796    3299 status.go:330] ha-574000-m04 host status = "Running" (err=<nil>)
	I0916 03:50:02.376815    3299 host.go:66] Checking if "ha-574000-m04" exists ...
	I0916 03:50:02.377125    3299 host.go:66] Checking if "ha-574000-m04" exists ...
	I0916 03:50:02.377451    3299 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 03:50:02.377468    3299 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/ha-574000-m04/id_rsa Username:docker}
	W0916 03:51:17.377705    3299 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0916 03:51:17.377856    3299 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0916 03:51:17.377883    3299 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0916 03:51:17.377898    3299 status.go:257] ha-574000-m04 status: &{Name:ha-574000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0916 03:51:17.377930    3299 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-574000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-574000 -n ha-574000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-574000 -n ha-574000: exit status 3 (25.981854375s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0916 03:51:43.360460    3324 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0916 03:51:43.360479    3324 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-574000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (208.88s)
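
Note on the timing: the 2m57.77s wall time of the status command is fully accounted for by the SSH dials in the stderr above. The dial to ha-574000 (192.168.105.5:22) spends roughly 28s in "host is down" retries before the final "operation timed out" at 03:48:47, and the dials to m03 and m04 each hit the ~75s connect timeout (03:48:47 to 03:50:02, then 03:50:02 to 03:51:17). The retry.go/sshutil.go lines reflect a dial-with-backoff loop; a minimal Go sketch of that pattern follows (illustrative only, not minikube's actual implementation; the address and timeout are taken from this log, and the real backoff is jittered rather than a clean doubling):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // dialWithRetry mimics the pattern in the log: try to reach the node's
    // SSH port, log the failure, back off, and retry a bounded number of times.
    func dialWithRetry(addr string, attempts int, timeout time.Duration) (net.Conn, error) {
        var lastErr error
        backoff := 100 * time.Millisecond
        for i := 0; i < attempts; i++ {
            conn, err := net.DialTimeout("tcp", addr, timeout)
            if err == nil {
                return conn, nil
            }
            lastErr = err
            fmt.Printf("will retry after %v: %v\n", backoff, err)
            time.Sleep(backoff)
            backoff *= 2 // deterministic doubling here; the log shows jittered steps (131ms, 475ms, 808ms, ...)
        }
        return nil, lastErr
    }

    func main() {
        // 192.168.105.5:22 is the node SSH endpoint from the log above.
        if _, err := dialWithRetry("192.168.105.5:22", 5, 75*time.Second); err != nil {
            fmt.Println("giving up:", err)
        }
    }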

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (234.4s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-574000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-574000 -v=7 --alsologtostderr
E0916 03:53:27.552227    1652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/addons-490000/client.crt: no such file or directory" logger="UnhandledError"
E0916 03:54:50.640084    1652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/addons-490000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-574000 -v=7 --alsologtostderr: (3m49.016203958s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-574000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-574000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.222901709s)

                                                
                                                
-- stdout --
	* [ha-574000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19651
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-574000" primary control-plane node in "ha-574000" cluster
	* Restarting existing qemu2 VM for "ha-574000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-574000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0916 03:56:50.442192    3433 out.go:345] Setting OutFile to fd 1 ...
	I0916 03:56:50.442359    3433 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 03:56:50.442363    3433 out.go:358] Setting ErrFile to fd 2...
	I0916 03:56:50.442369    3433 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 03:56:50.442527    3433 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1133/.minikube/bin
	I0916 03:56:50.443634    3433 out.go:352] Setting JSON to false
	I0916 03:56:50.464227    3433 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3373,"bootTime":1726480837,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0916 03:56:50.464302    3433 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 03:56:50.468692    3433 out.go:177] * [ha-574000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0916 03:56:50.476518    3433 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 03:56:50.476537    3433 notify.go:220] Checking for updates...
	I0916 03:56:50.483607    3433 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	I0916 03:56:50.486527    3433 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0916 03:56:50.489573    3433 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 03:56:50.492667    3433 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	I0916 03:56:50.495640    3433 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 03:56:50.498983    3433 config.go:182] Loaded profile config "ha-574000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 03:56:50.499048    3433 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 03:56:50.503586    3433 out.go:177] * Using the qemu2 driver based on existing profile
	I0916 03:56:50.510569    3433 start.go:297] selected driver: qemu2
	I0916 03:56:50.510578    3433 start.go:901] validating driver "qemu2" against &{Name:ha-574000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-574000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 03:56:50.510664    3433 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 03:56:50.513676    3433 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 03:56:50.513701    3433 cni.go:84] Creating CNI manager for ""
	I0916 03:56:50.513725    3433 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0916 03:56:50.513774    3433 start.go:340] cluster config:
	{Name:ha-574000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-574000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 03:56:50.517992    3433 iso.go:125] acquiring lock: {Name:mk55e9fb1297ea51932361bbd0234f8c1091a697 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 03:56:50.526537    3433 out.go:177] * Starting "ha-574000" primary control-plane node in "ha-574000" cluster
	I0916 03:56:50.530631    3433 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 03:56:50.530647    3433 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0916 03:56:50.530665    3433 cache.go:56] Caching tarball of preloaded images
	I0916 03:56:50.530725    3433 preload.go:172] Found /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0916 03:56:50.530732    3433 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0916 03:56:50.530800    3433 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/ha-574000/config.json ...
	I0916 03:56:50.531233    3433 start.go:360] acquireMachinesLock for ha-574000: {Name:mk751074fc8f4092f9eb4cf42ed6e63cca97a7fa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 03:56:50.531266    3433 start.go:364] duration metric: took 27.917µs to acquireMachinesLock for "ha-574000"
	I0916 03:56:50.531275    3433 start.go:96] Skipping create...Using existing machine configuration
	I0916 03:56:50.531280    3433 fix.go:54] fixHost starting: 
	I0916 03:56:50.531393    3433 fix.go:112] recreateIfNeeded on ha-574000: state=Stopped err=<nil>
	W0916 03:56:50.531403    3433 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 03:56:50.535593    3433 out.go:177] * Restarting existing qemu2 VM for "ha-574000" ...
	I0916 03:56:50.542562    3433 qemu.go:418] Using hvf for hardware acceleration
	I0916 03:56:50.542607    3433 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/ha-574000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/ha-574000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/ha-574000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:89:83:5f:be:7a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/ha-574000/disk.qcow2
	I0916 03:56:50.544729    3433 main.go:141] libmachine: STDOUT: 
	I0916 03:56:50.544748    3433 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 03:56:50.544775    3433 fix.go:56] duration metric: took 13.493875ms for fixHost
	I0916 03:56:50.544780    3433 start.go:83] releasing machines lock for "ha-574000", held for 13.50925ms
	W0916 03:56:50.544786    3433 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0916 03:56:50.544827    3433 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 03:56:50.544831    3433 start.go:729] Will try again in 5 seconds ...
	I0916 03:56:55.546833    3433 start.go:360] acquireMachinesLock for ha-574000: {Name:mk751074fc8f4092f9eb4cf42ed6e63cca97a7fa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 03:56:55.547238    3433 start.go:364] duration metric: took 303.584µs to acquireMachinesLock for "ha-574000"
	I0916 03:56:55.547367    3433 start.go:96] Skipping create...Using existing machine configuration
	I0916 03:56:55.547383    3433 fix.go:54] fixHost starting: 
	I0916 03:56:55.548084    3433 fix.go:112] recreateIfNeeded on ha-574000: state=Stopped err=<nil>
	W0916 03:56:55.548117    3433 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 03:56:55.553547    3433 out.go:177] * Restarting existing qemu2 VM for "ha-574000" ...
	I0916 03:56:55.561504    3433 qemu.go:418] Using hvf for hardware acceleration
	I0916 03:56:55.561710    3433 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/ha-574000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/ha-574000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/ha-574000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:89:83:5f:be:7a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/ha-574000/disk.qcow2
	I0916 03:56:55.570430    3433 main.go:141] libmachine: STDOUT: 
	I0916 03:56:55.570481    3433 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 03:56:55.570540    3433 fix.go:56] duration metric: took 23.1585ms for fixHost
	I0916 03:56:55.570557    3433 start.go:83] releasing machines lock for "ha-574000", held for 23.296792ms
	W0916 03:56:55.570690    3433 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-574000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-574000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 03:56:55.576508    3433 out.go:201] 
	W0916 03:56:55.579458    3433 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0916 03:56:55.579481    3433 out.go:270] * 
	* 
	W0916 03:56:55.582088    3433 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 03:56:55.588482    3433 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-574000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-574000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-574000 -n ha-574000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-574000 -n ha-574000: exit status 7 (32.333041ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-574000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (234.40s)
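
Both restart attempts above fail the same way: the qemu2 driver launches the VM through /opt/socket_vmnet/bin/socket_vmnet_client, and its connection to the /var/run/socket_vmnet unix socket is refused, meaning no socket_vmnet daemon is serving that socket on the build host. A minimal Go probe for that precondition (illustrative sketch; the socket path is the one shown in the log):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Dial the unix socket that socket_vmnet_client needs. A
        // "connection refused" here reproduces the failure above.
        conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
        if err != nil {
            fmt.Println("socket_vmnet not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }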

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-574000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-574000 node delete m03 -v=7 --alsologtostderr: exit status 83 (40.551459ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-574000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-574000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0916 03:56:55.727982    3446 out.go:345] Setting OutFile to fd 1 ...
	I0916 03:56:55.728185    3446 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 03:56:55.728189    3446 out.go:358] Setting ErrFile to fd 2...
	I0916 03:56:55.728191    3446 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 03:56:55.728322    3446 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1133/.minikube/bin
	I0916 03:56:55.728535    3446 mustload.go:65] Loading cluster: ha-574000
	I0916 03:56:55.728762    3446 config.go:182] Loaded profile config "ha-574000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	W0916 03:56:55.729076    3446 out.go:270] ! The control-plane node ha-574000 host is not running (will try others): state=Stopped
	! The control-plane node ha-574000 host is not running (will try others): state=Stopped
	W0916 03:56:55.729183    3446 out.go:270] ! The control-plane node ha-574000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-574000-m02 host is not running (will try others): state=Stopped
	I0916 03:56:55.733458    3446 out.go:177] * The control-plane node ha-574000-m03 host is not running: state=Stopped
	I0916 03:56:55.736444    3446 out.go:177]   To start a cluster, run: "minikube start -p ha-574000"

                                                
                                                
** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-574000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-574000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-574000 status -v=7 --alsologtostderr: exit status 7 (30.500125ms)

                                                
                                                
-- stdout --
	ha-574000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-574000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-574000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-574000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0916 03:56:55.768625    3448 out.go:345] Setting OutFile to fd 1 ...
	I0916 03:56:55.768795    3448 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 03:56:55.768798    3448 out.go:358] Setting ErrFile to fd 2...
	I0916 03:56:55.768801    3448 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 03:56:55.768915    3448 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1133/.minikube/bin
	I0916 03:56:55.769038    3448 out.go:352] Setting JSON to false
	I0916 03:56:55.769052    3448 mustload.go:65] Loading cluster: ha-574000
	I0916 03:56:55.769108    3448 notify.go:220] Checking for updates...
	I0916 03:56:55.769331    3448 config.go:182] Loaded profile config "ha-574000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 03:56:55.769337    3448 status.go:255] checking status of ha-574000 ...
	I0916 03:56:55.769572    3448 status.go:330] ha-574000 host status = "Stopped" (err=<nil>)
	I0916 03:56:55.769575    3448 status.go:343] host is not running, skipping remaining checks
	I0916 03:56:55.769577    3448 status.go:257] ha-574000 status: &{Name:ha-574000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 03:56:55.769587    3448 status.go:255] checking status of ha-574000-m02 ...
	I0916 03:56:55.769676    3448 status.go:330] ha-574000-m02 host status = "Stopped" (err=<nil>)
	I0916 03:56:55.769679    3448 status.go:343] host is not running, skipping remaining checks
	I0916 03:56:55.769680    3448 status.go:257] ha-574000-m02 status: &{Name:ha-574000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 03:56:55.769688    3448 status.go:255] checking status of ha-574000-m03 ...
	I0916 03:56:55.769777    3448 status.go:330] ha-574000-m03 host status = "Stopped" (err=<nil>)
	I0916 03:56:55.769780    3448 status.go:343] host is not running, skipping remaining checks
	I0916 03:56:55.769781    3448 status.go:257] ha-574000-m03 status: &{Name:ha-574000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 03:56:55.769785    3448 status.go:255] checking status of ha-574000-m04 ...
	I0916 03:56:55.769881    3448 status.go:330] ha-574000-m04 host status = "Stopped" (err=<nil>)
	I0916 03:56:55.769884    3448 status.go:343] host is not running, skipping remaining checks
	I0916 03:56:55.769885    3448 status.go:257] ha-574000-m04 status: &{Name:ha-574000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-574000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-574000 -n ha-574000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-574000 -n ha-574000: exit status 7 (29.687584ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-574000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.10s)
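
Exit status 83 here, as opposed to the exit status 80 of the start failures above, is how minikube distinguishes "host not running" from provisioning errors; the numeric codes are minikube-internal. The "(dbg) Non-zero exit" lines come from running the binary and unwrapping its exit code, roughly as in this Go sketch (illustrative; the real wrapper lives in the test helpers):

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        // The same invocation the test makes; adjust the path if run by hand.
        cmd := exec.Command("out/minikube-darwin-arm64", "-p", "ha-574000", "node", "delete", "m03")
        out, err := cmd.CombinedOutput()
        var ee *exec.ExitError
        if errors.As(err, &ee) {
            // This is the "Non-zero exit: ... exit status N" case.
            fmt.Printf("non-zero exit: %d\n%s", ee.ExitCode(), out)
            return
        }
        if err != nil {
            fmt.Println("could not run command:", err)
            return
        }
        fmt.Printf("ok:\n%s", out)
    }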

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (1.05s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-574000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-574000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-574000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-574000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-574000 -n ha-574000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-574000 -n ha-574000: exit status 7 (54.294291ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-574000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (1.05s)
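
The failing check in this test (ha_test.go:413) parses `minikube profile list --output json` and inspects the profile's Status field, expecting "Degraded" (some but not all control planes down) and finding "Stopped". A minimal Go decoder covering only the fields that check reads (field names are taken from the JSON quoted above; everything else in the payload is ignored by encoding/json):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // profileList matches just the "valid" entries' Name and Status
    // from the payload shown in the log.
    type profileList struct {
        Valid []struct {
            Name   string `json:"Name"`
            Status string `json:"Status"`
        } `json:"valid"`
    }

    func main() {
        out, err := exec.Command("out/minikube-darwin-arm64", "profile", "list", "--output", "json").Output()
        if err != nil {
            fmt.Println("profile list failed:", err)
            return
        }
        var pl profileList
        if err := json.Unmarshal(out, &pl); err != nil {
            fmt.Println("bad json:", err)
            return
        }
        for _, p := range pl.Valid {
            fmt.Printf("%s: %s\n", p.Name, p.Status) // the test wants "Degraded"; this run reports "Stopped"
        }
    }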

                                                
                                    
TestMultiControlPlane/serial/StopCluster (202.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-574000 stop -v=7 --alsologtostderr
E0916 03:57:57.095133    1652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/functional-926000/client.crt: no such file or directory" logger="UnhandledError"
E0916 03:58:27.540427    1652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/addons-490000/client.crt: no such file or directory" logger="UnhandledError"
E0916 03:59:20.172607    1652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/functional-926000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Done: out/minikube-darwin-arm64 -p ha-574000 stop -v=7 --alsologtostderr: (3m21.990309625s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-574000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-574000 status -v=7 --alsologtostderr: exit status 7 (64.552584ms)

                                                
                                                
-- stdout --
	ha-574000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-574000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-574000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-574000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0916 04:00:18.891067    3825 out.go:345] Setting OutFile to fd 1 ...
	I0916 04:00:18.891269    3825 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:00:18.891273    3825 out.go:358] Setting ErrFile to fd 2...
	I0916 04:00:18.891276    3825 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:00:18.891424    3825 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1133/.minikube/bin
	I0916 04:00:18.891581    3825 out.go:352] Setting JSON to false
	I0916 04:00:18.891591    3825 mustload.go:65] Loading cluster: ha-574000
	I0916 04:00:18.891633    3825 notify.go:220] Checking for updates...
	I0916 04:00:18.891910    3825 config.go:182] Loaded profile config "ha-574000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 04:00:18.891921    3825 status.go:255] checking status of ha-574000 ...
	I0916 04:00:18.892205    3825 status.go:330] ha-574000 host status = "Stopped" (err=<nil>)
	I0916 04:00:18.892209    3825 status.go:343] host is not running, skipping remaining checks
	I0916 04:00:18.892212    3825 status.go:257] ha-574000 status: &{Name:ha-574000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 04:00:18.892224    3825 status.go:255] checking status of ha-574000-m02 ...
	I0916 04:00:18.892370    3825 status.go:330] ha-574000-m02 host status = "Stopped" (err=<nil>)
	I0916 04:00:18.892375    3825 status.go:343] host is not running, skipping remaining checks
	I0916 04:00:18.892378    3825 status.go:257] ha-574000-m02 status: &{Name:ha-574000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 04:00:18.892384    3825 status.go:255] checking status of ha-574000-m03 ...
	I0916 04:00:18.892507    3825 status.go:330] ha-574000-m03 host status = "Stopped" (err=<nil>)
	I0916 04:00:18.892512    3825 status.go:343] host is not running, skipping remaining checks
	I0916 04:00:18.892515    3825 status.go:257] ha-574000-m03 status: &{Name:ha-574000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 04:00:18.892520    3825 status.go:255] checking status of ha-574000-m04 ...
	I0916 04:00:18.892633    3825 status.go:330] ha-574000-m04 host status = "Stopped" (err=<nil>)
	I0916 04:00:18.892638    3825 status.go:343] host is not running, skipping remaining checks
	I0916 04:00:18.892640    3825 status.go:257] ha-574000-m04 status: &{Name:ha-574000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-574000 status -v=7 --alsologtostderr": ha-574000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-574000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-574000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-574000-m04
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-574000 status -v=7 --alsologtostderr": ha-574000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-574000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-574000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-574000-m04
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-574000 status -v=7 --alsologtostderr": ha-574000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-574000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-574000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-574000-m04
type: Worker
host: Stopped
kubelet: Stopped
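
The three assertions above (ha_test.go:543, :549, :552) are substring counts over the plain-text status output; with every host reporting Stopped, none of the expected counts match. A Go sketch of that counting style (the exact expected counts live in ha_test.go and are not reproduced here):

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        // Abbreviated stand-in for the `minikube status` output above.
        status := "ha-574000\ntype: Control Plane\nhost: Stopped\nkubelet: Stopped\napiserver: Stopped\nkubeconfig: Stopped\n"
        fmt.Println("control planes:    ", strings.Count(status, "type: Control Plane"))
        fmt.Println("stopped kubelets:  ", strings.Count(status, "kubelet: Stopped"))
        fmt.Println("stopped apiservers:", strings.Count(status, "apiserver: Stopped"))
    }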

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-574000 -n ha-574000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-574000 -n ha-574000: exit status 7 (32.248125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-574000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (202.09s)
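
Each post-mortem in this report runs `minikube status --format={{.Host}}`; the --format argument is a Go text/template rendered against a per-node status value whose fields (Name, Host, Kubelet, APIServer, Kubeconfig) appear in the status.go struct dumps above. A self-contained sketch of that rendering, using a stand-in struct rather than minikube's actual type:

    package main

    import (
        "os"
        "text/template"
    )

    // nodeStatus mirrors the fields visible in the status.go dumps above.
    type nodeStatus struct {
        Name       string
        Host       string
        Kubelet    string
        APIServer  string
        Kubeconfig string
    }

    func main() {
        st := nodeStatus{Name: "ha-574000", Host: "Stopped", Kubelet: "Stopped", APIServer: "Stopped", Kubeconfig: "Stopped"}
        tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
        if err := tmpl.Execute(os.Stdout, st); err != nil { // prints "Stopped", matching the post-mortem output
            panic(err)
        }
    }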

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (5.25s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-574000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-574000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.180680583s)

                                                
                                                
-- stdout --
	* [ha-574000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19651
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-574000" primary control-plane node in "ha-574000" cluster
	* Restarting existing qemu2 VM for "ha-574000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-574000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0916 04:00:18.954465    3829 out.go:345] Setting OutFile to fd 1 ...
	I0916 04:00:18.954635    3829 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:00:18.954638    3829 out.go:358] Setting ErrFile to fd 2...
	I0916 04:00:18.954641    3829 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:00:18.954752    3829 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1133/.minikube/bin
	I0916 04:00:18.955720    3829 out.go:352] Setting JSON to false
	I0916 04:00:18.972398    3829 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3581,"bootTime":1726480837,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0916 04:00:18.972467    3829 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 04:00:18.977658    3829 out.go:177] * [ha-574000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0916 04:00:18.984552    3829 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 04:00:18.984615    3829 notify.go:220] Checking for updates...
	I0916 04:00:18.992523    3829 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	I0916 04:00:18.995523    3829 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0916 04:00:18.998597    3829 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 04:00:19.001449    3829 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	I0916 04:00:19.004520    3829 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 04:00:19.007959    3829 config.go:182] Loaded profile config "ha-574000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 04:00:19.008238    3829 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 04:00:19.012511    3829 out.go:177] * Using the qemu2 driver based on existing profile
	I0916 04:00:19.019549    3829 start.go:297] selected driver: qemu2
	I0916 04:00:19.019556    3829 start.go:901] validating driver "qemu2" against &{Name:ha-574000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-574000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 04:00:19.019637    3829 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 04:00:19.022060    3829 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 04:00:19.022082    3829 cni.go:84] Creating CNI manager for ""
	I0916 04:00:19.022102    3829 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0916 04:00:19.022148    3829 start.go:340] cluster config:
	{Name:ha-574000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-574000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 04:00:19.025979    3829 iso.go:125] acquiring lock: {Name:mk55e9fb1297ea51932361bbd0234f8c1091a697 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 04:00:19.034576    3829 out.go:177] * Starting "ha-574000" primary control-plane node in "ha-574000" cluster
	I0916 04:00:19.038401    3829 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 04:00:19.038420    3829 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0916 04:00:19.038431    3829 cache.go:56] Caching tarball of preloaded images
	I0916 04:00:19.038511    3829 preload.go:172] Found /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0916 04:00:19.038516    3829 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0916 04:00:19.038595    3829 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/ha-574000/config.json ...
	I0916 04:00:19.039067    3829 start.go:360] acquireMachinesLock for ha-574000: {Name:mk751074fc8f4092f9eb4cf42ed6e63cca97a7fa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 04:00:19.039100    3829 start.go:364] duration metric: took 27.458µs to acquireMachinesLock for "ha-574000"
	I0916 04:00:19.039111    3829 start.go:96] Skipping create...Using existing machine configuration
	I0916 04:00:19.039117    3829 fix.go:54] fixHost starting: 
	I0916 04:00:19.039245    3829 fix.go:112] recreateIfNeeded on ha-574000: state=Stopped err=<nil>
	W0916 04:00:19.039254    3829 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 04:00:19.042587    3829 out.go:177] * Restarting existing qemu2 VM for "ha-574000" ...
	I0916 04:00:19.050495    3829 qemu.go:418] Using hvf for hardware acceleration
	I0916 04:00:19.050536    3829 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/ha-574000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/ha-574000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/ha-574000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:89:83:5f:be:7a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/ha-574000/disk.qcow2
	I0916 04:00:19.052485    3829 main.go:141] libmachine: STDOUT: 
	I0916 04:00:19.052503    3829 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 04:00:19.052533    3829 fix.go:56] duration metric: took 13.417167ms for fixHost
	I0916 04:00:19.052538    3829 start.go:83] releasing machines lock for "ha-574000", held for 13.4325ms
	W0916 04:00:19.052543    3829 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0916 04:00:19.052585    3829 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 04:00:19.052589    3829 start.go:729] Will try again in 5 seconds ...
	I0916 04:00:24.054614    3829 start.go:360] acquireMachinesLock for ha-574000: {Name:mk751074fc8f4092f9eb4cf42ed6e63cca97a7fa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 04:00:24.055015    3829 start.go:364] duration metric: took 308.417µs to acquireMachinesLock for "ha-574000"
	I0916 04:00:24.055169    3829 start.go:96] Skipping create...Using existing machine configuration
	I0916 04:00:24.055189    3829 fix.go:54] fixHost starting: 
	I0916 04:00:24.055879    3829 fix.go:112] recreateIfNeeded on ha-574000: state=Stopped err=<nil>
	W0916 04:00:24.055902    3829 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 04:00:24.060342    3829 out.go:177] * Restarting existing qemu2 VM for "ha-574000" ...
	I0916 04:00:24.064151    3829 qemu.go:418] Using hvf for hardware acceleration
	I0916 04:00:24.064407    3829 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/ha-574000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/ha-574000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/ha-574000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:89:83:5f:be:7a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/ha-574000/disk.qcow2
	I0916 04:00:24.073126    3829 main.go:141] libmachine: STDOUT: 
	I0916 04:00:24.073180    3829 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 04:00:24.073246    3829 fix.go:56] duration metric: took 18.0595ms for fixHost
	I0916 04:00:24.073268    3829 start.go:83] releasing machines lock for "ha-574000", held for 18.235208ms
	W0916 04:00:24.073441    3829 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-574000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-574000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 04:00:24.080272    3829 out.go:201] 
	W0916 04:00:24.083269    3829 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0916 04:00:24.083299    3829 out.go:270] * 
	* 
	W0916 04:00:24.086555    3829 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 04:00:24.094298    3829 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-574000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-574000 -n ha-574000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-574000 -n ha-574000: exit status 7 (67.864541ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-574000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.25s)
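
Every failure in this group traces to the same log line: the qemu2 driver launches the VM through /opt/socket_vmnet/bin/socket_vmnet_client, which could not reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"). A minimal triage sketch on the build host follows; the lsof check and the launchd service label are assumptions, not taken from this report:

	# Is anything listening on the socket the driver is pointed at?
	ls -l /var/run/socket_vmnet
	sudo lsof /var/run/socket_vmnet
	# Reproduce the refused connection without minikube: socket_vmnet_client
	# connects to the socket, then execs the given command with the connection
	# on fd 3, so /usr/bin/true exercises only the connect step.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /usr/bin/true
	# If socket_vmnet runs as a launchd service (label assumed from the
	# upstream lima-vm/socket_vmnet install), kick it back to life:
	sudo launchctl kickstart -k system/io.github.lima-vm.socket_vmnet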

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-574000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-574000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-574000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-574000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-574000 -n ha-574000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-574000 -n ha-574000: exit status 7 (29.639917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-574000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)
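
The assertion reads the Status field of the profile in `profile list --output json`; with every node stopped, minikube reports "Stopped" rather than the expected "Degraded". The field the test checks can be pulled out directly, assuming jq is available on the agent:

	out/minikube-darwin-arm64 profile list --output json \
	  | jq -r '.valid[] | "\(.Name): \(.Status)"'
	# ha-574000: Stopped   (the test wanted: Degraded)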

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-574000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-574000 --control-plane -v=7 --alsologtostderr: exit status 83 (40.887833ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-574000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-574000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0916 04:00:24.284008    3847 out.go:345] Setting OutFile to fd 1 ...
	I0916 04:00:24.284350    3847 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:00:24.284354    3847 out.go:358] Setting ErrFile to fd 2...
	I0916 04:00:24.284356    3847 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:00:24.284475    3847 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1133/.minikube/bin
	I0916 04:00:24.284689    3847 mustload.go:65] Loading cluster: ha-574000
	I0916 04:00:24.284932    3847 config.go:182] Loaded profile config "ha-574000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	W0916 04:00:24.285226    3847 out.go:270] ! The control-plane node ha-574000 host is not running (will try others): state=Stopped
	! The control-plane node ha-574000 host is not running (will try others): state=Stopped
	W0916 04:00:24.285323    3847 out.go:270] ! The control-plane node ha-574000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-574000-m02 host is not running (will try others): state=Stopped
	I0916 04:00:24.288265    3847 out.go:177] * The control-plane node ha-574000-m03 host is not running: state=Stopped
	I0916 04:00:24.292293    3847 out.go:177]   To start a cluster, run: "minikube start -p ha-574000"

                                                
                                                
** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-574000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-574000 -n ha-574000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-574000 -n ha-574000: exit status 7 (29.611583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-574000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.07s)
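
`node add` walks the existing control-plane nodes looking for a running host to talk to (the mustload.go lines above); with ha-574000, -m02, and -m03 all stopped it gives up with exit status 83 before ever reaching an apiserver. Per-node host state can be inspected with a status template along these lines (the .Name field is an assumption about minikube's status struct; .Host appears throughout this report):

	# With a multi-node profile the template is applied once per node,
	# so all four ha-574000 hosts would print here as Stopped.
	out/minikube-darwin-arm64 status -p ha-574000 --format '{{.Name}}: {{.Host}}{{"\n"}}'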

                                                
                                    
TestImageBuild/serial/Setup (10.38s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-250000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-250000 --driver=qemu2 : exit status 80 (10.311735041s)

                                                
                                                
-- stdout --
	* [image-250000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19651
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-250000" primary control-plane node in "image-250000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-250000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-250000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-250000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-250000 -n image-250000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-250000 -n image-250000: exit status 7 (68.353541ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-250000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (10.38s)
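
The stdout above shows minikube's one built-in retry on the create path: create the VM, hit the refused socket, delete the half-created "image-250000" machine, create again, then exit 80 (GUEST_PROVISION). The error text itself suggests the manual recovery, which only helps once the daemon is reachable again:

	# Clear the leftover machine, then retry with full driver logging:
	out/minikube-darwin-arm64 delete -p image-250000
	out/minikube-darwin-arm64 start -p image-250000 --driver=qemu2 --alsologtostderr -v=7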

                                                
                                    
TestJSONOutput/start/Command (9.9s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-579000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-579000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.9006475s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"dd4c276e-cbd1-4f8d-8db1-1d4833cbe2aa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-579000] minikube v1.34.0 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"6022b097-4b70-4b62-8313-19c1cdd89bff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19651"}}
	{"specversion":"1.0","id":"b8a61b5c-9a09-439c-bc50-60ca63eea9f6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig"}}
	{"specversion":"1.0","id":"ed17a073-a8d2-41b1-97aa-64f55e5a36c3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"c12317f2-085d-4bd7-ad88-1725a40f93fd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"12eb92f8-3519-4e0e-94e9-76fd501a7829","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube"}}
	{"specversion":"1.0","id":"f52a0848-9be9-4ae9-967a-04e4c51d3ecf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"da20795c-c539-451e-b5b6-7480d09f4487","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"904d1d6d-774a-4939-943b-f8bb6567c9d3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"141a5744-bf1f-48d8-9270-145f7b4cdea0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-579000\" primary control-plane node in \"json-output-579000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"687f138f-2b63-4f90-839a-691598b031cf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"99fc96f4-0548-4f5c-9fdc-f512df7d1b37","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-579000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"535585f0-1eb2-4548-ac95-46f26d951e3c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"9645a458-c78d-402d-953a-1c35ad53e97c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"c17715c1-3664-4ee6-8ff6-3f9d73036774","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-579000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"79ae4d76-a12c-45a6-aa50-72b2d89ad8c4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"d548efca-6253-441a-a38b-c0156b8ab093","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-579000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.90s)
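
TestJSONOutput expects every stdout line to be a CloudEvents JSON object, but the driver's raw "OUTPUT:"/"ERROR:" lines are printed unwrapped, so decoding stops at the first byte that cannot begin a JSON value ('O' here). A rough out-of-band version of the same check, assuming jq is on the agent:

	out/minikube-darwin-arm64 start -p json-output-579000 --output=json --user=testUser \
	    --memory=2200 --wait=true --driver=qemu2 \
	  | while IFS= read -r line; do
	      # jq -e fails on anything that is not a JSON object with .specversion
	      printf '%s' "$line" | jq -e .specversion >/dev/null 2>&1 \
	        || echo "not a CloudEvent: $line"
	    done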

                                                
                                    
TestJSONOutput/pause/Command (0.09s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-579000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-579000 --output=json --user=testUser: exit status 83 (85.128292ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"d28917df-2df4-48ae-ac43-feefa30c2ad7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-579000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"f70be9de-e98b-4fff-8906-184b0080d978","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-579000\""}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-579000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.09s)

                                                
                                    
TestJSONOutput/unpause/Command (0.05s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-579000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-579000 --output=json --user=testUser: exit status 83 (50.275292ms)

                                                
                                                
-- stdout --
	* The control-plane node json-output-579000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-579000"

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-579000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-579000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.05s)

                                                
                                    
TestMinikubeProfile (10.09s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-264000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-264000 --driver=qemu2 : exit status 80 (9.789619541s)

                                                
                                                
-- stdout --
	* [first-264000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19651
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-264000" primary control-plane node in "first-264000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-264000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-264000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-264000 --driver=qemu2 ": exit status 80
panic.go:629: *** TestMinikubeProfile FAILED at 2024-09-16 04:00:58.527071 -0700 PDT m=+2474.639480209
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-265000 -n second-265000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-265000 -n second-265000: exit status 85 (82.986792ms)

                                                
                                                
-- stdout --
	* Profile "second-265000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-265000"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-265000" host is not running, skipping log retrieval (state="* Profile \"second-265000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-265000\"")
helpers_test.go:175: Cleaning up "second-265000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-265000
panic.go:629: *** TestMinikubeProfile FAILED at 2024-09-16 04:00:58.7191 -0700 PDT m=+2474.831516001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-264000 -n first-264000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-264000 -n first-264000: exit status 7 (30.33925ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-264000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-264000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-264000
--- FAIL: TestMinikubeProfile (10.09s)
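
Two distinct non-running states show up in this post-mortem: exit status 7 from `status` for a profile whose machine exists but is Stopped (first-264000), and exit status 85 for a profile that was never created (second-265000). The helper cleans both up with per-profile deletes; sweeping everything a run left behind works too:

	out/minikube-darwin-arm64 profile list    # see what the run left behind
	out/minikube-darwin-arm64 delete --all    # delete all profiles and machines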

                                                
                                    
TestMountStart/serial/StartWithMountFirst (10.04s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-075000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-075000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (9.967828708s)

                                                
                                                
-- stdout --
	* [mount-start-1-075000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19651
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-075000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-075000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-075000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-075000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-075000 -n mount-start-1-075000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-075000 -n mount-start-1-075000: exit status 7 (67.888833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-075000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.04s)
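
The --mount-* flags map onto the Mount* fields of the cluster config seen throughout this report (MountPort, MountMSize, MountUID/MountGID, Mount9PVersion:9p2000.L) and would normally become a 9p mount inside the guest. For reference only, a guest-side equivalent sketch; the host address and the exact option mapping are assumptions:

	# 9p over TCP from the guest to the host-side server minikube would run:
	sudo mount -t 9p -o trans=tcp,port=46464,version=9p2000.L,msize=6543,dfltuid=0,dfltgid=0 \
	    192.168.105.1 /minikube-host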

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (9.96s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-990000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-990000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.894226542s)

                                                
                                                
-- stdout --
	* [multinode-990000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19651
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-990000" primary control-plane node in "multinode-990000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-990000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0916 04:01:09.082170    3990 out.go:345] Setting OutFile to fd 1 ...
	I0916 04:01:09.082303    3990 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:01:09.082306    3990 out.go:358] Setting ErrFile to fd 2...
	I0916 04:01:09.082309    3990 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:01:09.082439    3990 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1133/.minikube/bin
	I0916 04:01:09.083463    3990 out.go:352] Setting JSON to false
	I0916 04:01:09.099507    3990 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3632,"bootTime":1726480837,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0916 04:01:09.099576    3990 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 04:01:09.106251    3990 out.go:177] * [multinode-990000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0916 04:01:09.115134    3990 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 04:01:09.115181    3990 notify.go:220] Checking for updates...
	I0916 04:01:09.124271    3990 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	I0916 04:01:09.127199    3990 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0916 04:01:09.130205    3990 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 04:01:09.133278    3990 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	I0916 04:01:09.134745    3990 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 04:01:09.138329    3990 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 04:01:09.142196    3990 out.go:177] * Using the qemu2 driver based on user configuration
	I0916 04:01:09.148233    3990 start.go:297] selected driver: qemu2
	I0916 04:01:09.148241    3990 start.go:901] validating driver "qemu2" against <nil>
	I0916 04:01:09.148250    3990 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 04:01:09.150645    3990 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 04:01:09.153189    3990 out.go:177] * Automatically selected the socket_vmnet network
	I0916 04:01:09.156328    3990 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 04:01:09.156345    3990 cni.go:84] Creating CNI manager for ""
	I0916 04:01:09.156368    3990 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0916 04:01:09.156373    3990 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0916 04:01:09.156403    3990 start.go:340] cluster config:
	{Name:multinode-990000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-990000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 04:01:09.160237    3990 iso.go:125] acquiring lock: {Name:mk55e9fb1297ea51932361bbd0234f8c1091a697 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 04:01:09.168290    3990 out.go:177] * Starting "multinode-990000" primary control-plane node in "multinode-990000" cluster
	I0916 04:01:09.172160    3990 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 04:01:09.172174    3990 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0916 04:01:09.172183    3990 cache.go:56] Caching tarball of preloaded images
	I0916 04:01:09.172246    3990 preload.go:172] Found /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0916 04:01:09.172253    3990 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0916 04:01:09.172481    3990 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/multinode-990000/config.json ...
	I0916 04:01:09.172498    3990 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/multinode-990000/config.json: {Name:mkbb0bc6d7064dc95834dc8dfec15eb9bca5f742 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 04:01:09.172727    3990 start.go:360] acquireMachinesLock for multinode-990000: {Name:mk751074fc8f4092f9eb4cf42ed6e63cca97a7fa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 04:01:09.172764    3990 start.go:364] duration metric: took 30.291µs to acquireMachinesLock for "multinode-990000"
	I0916 04:01:09.172776    3990 start.go:93] Provisioning new machine with config: &{Name:multinode-990000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-990000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 04:01:09.172802    3990 start.go:125] createHost starting for "" (driver="qemu2")
	I0916 04:01:09.179216    3990 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0916 04:01:09.197737    3990 start.go:159] libmachine.API.Create for "multinode-990000" (driver="qemu2")
	I0916 04:01:09.197763    3990 client.go:168] LocalClient.Create starting
	I0916 04:01:09.197830    3990 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/ca.pem
	I0916 04:01:09.197864    3990 main.go:141] libmachine: Decoding PEM data...
	I0916 04:01:09.197880    3990 main.go:141] libmachine: Parsing certificate...
	I0916 04:01:09.197918    3990 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/cert.pem
	I0916 04:01:09.197949    3990 main.go:141] libmachine: Decoding PEM data...
	I0916 04:01:09.197957    3990 main.go:141] libmachine: Parsing certificate...
	I0916 04:01:09.198418    3990 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19651-1133/.minikube/cache/iso/arm64/minikube-v1.34.0-1726415472-19646-arm64.iso...
	I0916 04:01:09.359740    3990 main.go:141] libmachine: Creating SSH key...
	I0916 04:01:09.408914    3990 main.go:141] libmachine: Creating Disk image...
	I0916 04:01:09.408922    3990 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0916 04:01:09.409092    3990 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/multinode-990000/disk.qcow2.raw /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/multinode-990000/disk.qcow2
	I0916 04:01:09.418025    3990 main.go:141] libmachine: STDOUT: 
	I0916 04:01:09.418043    3990 main.go:141] libmachine: STDERR: 
	I0916 04:01:09.418101    3990 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/multinode-990000/disk.qcow2 +20000M
	I0916 04:01:09.425918    3990 main.go:141] libmachine: STDOUT: Image resized.
	
	I0916 04:01:09.425934    3990 main.go:141] libmachine: STDERR: 
	I0916 04:01:09.425949    3990 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/multinode-990000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/multinode-990000/disk.qcow2
	I0916 04:01:09.425953    3990 main.go:141] libmachine: Starting QEMU VM...
	I0916 04:01:09.425964    3990 qemu.go:418] Using hvf for hardware acceleration
	I0916 04:01:09.425993    3990 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/multinode-990000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/multinode-990000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/multinode-990000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:86:ad:af:22:e0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/multinode-990000/disk.qcow2
	I0916 04:01:09.427521    3990 main.go:141] libmachine: STDOUT: 
	I0916 04:01:09.427538    3990 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 04:01:09.427556    3990 client.go:171] duration metric: took 229.795708ms to LocalClient.Create
	I0916 04:01:11.429654    3990 start.go:128] duration metric: took 2.256916792s to createHost
	I0916 04:01:11.429703    3990 start.go:83] releasing machines lock for "multinode-990000", held for 2.257017333s
	W0916 04:01:11.429772    3990 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 04:01:11.448946    3990 out.go:177] * Deleting "multinode-990000" in qemu2 ...
	W0916 04:01:11.487244    3990 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 04:01:11.487270    3990 start.go:729] Will try again in 5 seconds ...
	I0916 04:01:16.489248    3990 start.go:360] acquireMachinesLock for multinode-990000: {Name:mk751074fc8f4092f9eb4cf42ed6e63cca97a7fa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 04:01:16.489683    3990 start.go:364] duration metric: took 347.792µs to acquireMachinesLock for "multinode-990000"
	I0916 04:01:16.489812    3990 start.go:93] Provisioning new machine with config: &{Name:multinode-990000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-990000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 04:01:16.490118    3990 start.go:125] createHost starting for "" (driver="qemu2")
	I0916 04:01:16.508939    3990 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0916 04:01:16.561079    3990 start.go:159] libmachine.API.Create for "multinode-990000" (driver="qemu2")
	I0916 04:01:16.561123    3990 client.go:168] LocalClient.Create starting
	I0916 04:01:16.561220    3990 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/ca.pem
	I0916 04:01:16.561285    3990 main.go:141] libmachine: Decoding PEM data...
	I0916 04:01:16.561306    3990 main.go:141] libmachine: Parsing certificate...
	I0916 04:01:16.561372    3990 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/cert.pem
	I0916 04:01:16.561416    3990 main.go:141] libmachine: Decoding PEM data...
	I0916 04:01:16.561429    3990 main.go:141] libmachine: Parsing certificate...
	I0916 04:01:16.562122    3990 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19651-1133/.minikube/cache/iso/arm64/minikube-v1.34.0-1726415472-19646-arm64.iso...
	I0916 04:01:16.735721    3990 main.go:141] libmachine: Creating SSH key...
	I0916 04:01:16.873763    3990 main.go:141] libmachine: Creating Disk image...
	I0916 04:01:16.873770    3990 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0916 04:01:16.873966    3990 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/multinode-990000/disk.qcow2.raw /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/multinode-990000/disk.qcow2
	I0916 04:01:16.883577    3990 main.go:141] libmachine: STDOUT: 
	I0916 04:01:16.883599    3990 main.go:141] libmachine: STDERR: 
	I0916 04:01:16.883659    3990 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/multinode-990000/disk.qcow2 +20000M
	I0916 04:01:16.891420    3990 main.go:141] libmachine: STDOUT: Image resized.
	
	I0916 04:01:16.891440    3990 main.go:141] libmachine: STDERR: 
	I0916 04:01:16.891449    3990 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/multinode-990000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/multinode-990000/disk.qcow2
	I0916 04:01:16.891453    3990 main.go:141] libmachine: Starting QEMU VM...
	I0916 04:01:16.891460    3990 qemu.go:418] Using hvf for hardware acceleration
	I0916 04:01:16.891483    3990 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/multinode-990000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/multinode-990000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/multinode-990000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:8c:38:57:18:3c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/multinode-990000/disk.qcow2
	I0916 04:01:16.893098    3990 main.go:141] libmachine: STDOUT: 
	I0916 04:01:16.893113    3990 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 04:01:16.893126    3990 client.go:171] duration metric: took 332.009792ms to LocalClient.Create
	I0916 04:01:18.895297    3990 start.go:128] duration metric: took 2.405226083s to createHost
	I0916 04:01:18.895354    3990 start.go:83] releasing machines lock for "multinode-990000", held for 2.405741792s
	W0916 04:01:18.895687    3990 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-990000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-990000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 04:01:18.906504    3990 out.go:201] 
	W0916 04:01:18.918549    3990 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0916 04:01:18.918611    3990 out.go:270] * 
	* 
	W0916 04:01:18.921207    3990 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 04:01:18.932432    3990 out.go:201] 

** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-990000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-990000 -n multinode-990000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-990000 -n multinode-990000: exit status 7 (66.03025ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-990000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.96s)
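
Every qemu2 start in this run dies at the same step: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so the VM is never launched and every later test in the serial inherits a stopped host. The same connection attempt can be reproduced outside minikube with a few lines of Go (a minimal sketch, assuming the default socket path shown in the log above; not part of the test suite):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Same unix socket that socket_vmnet_client tried above. If the
        // socket_vmnet daemon is not running, Dial fails with the same
        // "connection refused" seen in the captured STDERR.
        conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
        if err != nil {
            fmt.Println("socket_vmnet unreachable:", err)
            return
        }
        conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }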

TestMultiNode/serial/DeployApp2Nodes (114.2s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-990000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-990000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (130.602041ms)

** stderr ** 
	error: cluster "multinode-990000" does not exist

** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-990000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-990000 -- rollout status deployment/busybox: exit status 1 (59.032042ms)

** stderr ** 
	error: no server found for cluster "multinode-990000"

** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-990000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-990000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (57.759708ms)

** stderr ** 
	error: no server found for cluster "multinode-990000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-990000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-990000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.650667ms)

** stderr ** 
	error: no server found for cluster "multinode-990000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-990000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-990000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.040333ms)

** stderr ** 
	error: no server found for cluster "multinode-990000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-990000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-990000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.893792ms)

** stderr ** 
	error: no server found for cluster "multinode-990000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-990000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-990000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.193834ms)

** stderr ** 
	error: no server found for cluster "multinode-990000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-990000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-990000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.327583ms)

** stderr ** 
	error: no server found for cluster "multinode-990000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-990000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-990000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (100.73775ms)

** stderr ** 
	error: no server found for cluster "multinode-990000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-990000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-990000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.867083ms)

** stderr ** 
	error: no server found for cluster "multinode-990000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-990000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-990000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.28825ms)

** stderr ** 
	error: no server found for cluster "multinode-990000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-990000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-990000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.345583ms)

** stderr ** 
	error: no server found for cluster "multinode-990000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
E0916 04:02:57.182534    1652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/functional-926000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-990000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-990000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.132041ms)

** stderr ** 
	error: no server found for cluster "multinode-990000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-990000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-990000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (57.345958ms)

** stderr ** 
	error: no server found for cluster "multinode-990000"

** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-990000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-990000 -- exec  -- nslookup kubernetes.io: exit status 1 (57.003166ms)

** stderr ** 
	error: no server found for cluster "multinode-990000"

** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-990000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-990000 -- exec  -- nslookup kubernetes.default: exit status 1 (57.494625ms)

** stderr ** 
	error: no server found for cluster "multinode-990000"

** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-990000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-990000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (56.982042ms)

** stderr ** 
	error: no server found for cluster "multinode-990000"

** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-990000 -n multinode-990000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-990000 -n multinode-990000: exit status 7 (30.72925ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-990000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (114.20s)
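
The block of identical `get pods -o jsonpath=...` attempts above is the harness polling for pod IPs; with no API server behind the profile, every attempt fails with "no server found" until the retry budget is exhausted, which appears to be where most of the test's 114 seconds go. The shape of that loop, as a rough sketch with illustrative interval and timeout values (a hypothetical helper, not the test's actual code):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // pollPodIPs re-runs the same kubectl query from the log until it
    // succeeds or the deadline passes. Interval and timeout here are
    // placeholders, not the values multinode_test.go actually uses.
    func pollPodIPs(profile string, timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        for {
            out, err := exec.Command("out/minikube-darwin-arm64", "kubectl", "-p", profile,
                "--", "get", "pods", "-o", "jsonpath={.items[*].status.podIP}").Output()
            if err == nil {
                return string(out), nil
            }
            if time.Now().After(deadline) {
                return "", fmt.Errorf("failed to retrieve Pod IPs: %w", err)
            }
            time.Sleep(10 * time.Second)
        }
    }

    func main() {
        ips, err := pollPodIPs("multinode-990000", 2*time.Minute)
        fmt.Println(ips, err)
    }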

TestMultiNode/serial/PingHostFrom2Pods (0.09s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-990000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-990000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (57.181417ms)

** stderr ** 
	error: no server found for cluster "multinode-990000"

** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-990000 -n multinode-990000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-990000 -n multinode-990000: exit status 7 (29.3815ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-990000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

TestMultiNode/serial/AddNode (0.07s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-990000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-990000 -v 3 --alsologtostderr: exit status 83 (42.30975ms)

-- stdout --
	* The control-plane node multinode-990000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-990000"

-- /stdout --
** stderr ** 
	I0916 04:03:13.430051    4089 out.go:345] Setting OutFile to fd 1 ...
	I0916 04:03:13.430219    4089 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:03:13.430222    4089 out.go:358] Setting ErrFile to fd 2...
	I0916 04:03:13.430225    4089 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:03:13.430346    4089 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1133/.minikube/bin
	I0916 04:03:13.430566    4089 mustload.go:65] Loading cluster: multinode-990000
	I0916 04:03:13.430790    4089 config.go:182] Loaded profile config "multinode-990000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 04:03:13.434999    4089 out.go:177] * The control-plane node multinode-990000 host is not running: state=Stopped
	I0916 04:03:13.439985    4089 out.go:177]   To start a cluster, run: "minikube start -p multinode-990000"

** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-990000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-990000 -n multinode-990000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-990000 -n multinode-990000: exit status 7 (29.566875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-990000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-990000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-990000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (29.323125ms)

** stderr ** 
	Error in configuration: context was not found for specified context: multinode-990000

** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-990000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-990000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-990000 -n multinode-990000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-990000 -n multinode-990000: exit status 7 (30.450833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-990000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.08s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-990000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-990000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-990000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"multinode-990000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-990000 -n multinode-990000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-990000 -n multinode-990000: exit status 7 (30.548ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-990000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.08s)
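
The assertion behind this failure counts the Nodes entries in the `profile list --output json` payload quoted above: the serial expects 3 nodes at this point, but the stopped profile only ever registered its single control-plane entry. A trimmed sketch of that check (the struct declares only the handful of fields needed to count nodes, a hand-picked subset of the real config schema):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // profileList mirrors just enough of the payload above to count nodes.
    type profileList struct {
        Valid []struct {
            Name   string `json:"Name"`
            Config struct {
                Nodes []struct {
                    ControlPlane bool `json:"ControlPlane"`
                    Worker       bool `json:"Worker"`
                } `json:"Nodes"`
            } `json:"Config"`
        } `json:"valid"`
    }

    func main() {
        // Abbreviated version of the JSON captured in the failure message.
        raw := []byte(`{"invalid":[],"valid":[{"Name":"multinode-990000",
            "Config":{"Nodes":[{"ControlPlane":true,"Worker":true}]}}]}`)
        var pl profileList
        if err := json.Unmarshal(raw, &pl); err != nil {
            panic(err)
        }
        fmt.Println("nodes:", len(pl.Valid[0].Config.Nodes)) // prints 1, not the expected 3
    }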

TestMultiNode/serial/CopyFile (0.06s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-990000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-990000 status --output json --alsologtostderr: exit status 7 (29.742625ms)

-- stdout --
	{"Name":"multinode-990000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0916 04:03:13.640810    4101 out.go:345] Setting OutFile to fd 1 ...
	I0916 04:03:13.640991    4101 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:03:13.640994    4101 out.go:358] Setting ErrFile to fd 2...
	I0916 04:03:13.640996    4101 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:03:13.641119    4101 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1133/.minikube/bin
	I0916 04:03:13.641242    4101 out.go:352] Setting JSON to true
	I0916 04:03:13.641250    4101 mustload.go:65] Loading cluster: multinode-990000
	I0916 04:03:13.641321    4101 notify.go:220] Checking for updates...
	I0916 04:03:13.641447    4101 config.go:182] Loaded profile config "multinode-990000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 04:03:13.641453    4101 status.go:255] checking status of multinode-990000 ...
	I0916 04:03:13.641695    4101 status.go:330] multinode-990000 host status = "Stopped" (err=<nil>)
	I0916 04:03:13.641698    4101 status.go:343] host is not running, skipping remaining checks
	I0916 04:03:13.641700    4101 status.go:257] multinode-990000 status: &{Name:multinode-990000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-990000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-990000 -n multinode-990000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-990000 -n multinode-990000: exit status 7 (30.298459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-990000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
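
The decode error here is a shape mismatch rather than bad JSON: with a single stopped node, `status --output json` printed one object, while the test unmarshals into `[]cmd.Status`. A decoder that tolerates both shapes looks roughly like this (the `Status` struct is a minimal stand-in for minikube's, and this is a sketch, not minikube's actual handling):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Status is a stand-in holding a few of the fields printed above.
    type Status struct {
        Name, Host, Kubelet, APIServer string
    }

    // decodeStatuses accepts either a JSON array or a single object,
    // the exact mismatch behind "cannot unmarshal object into Go value
    // of type []cmd.Status".
    func decodeStatuses(raw []byte) ([]Status, error) {
        var many []Status
        if err := json.Unmarshal(raw, &many); err == nil {
            return many, nil
        }
        var one Status
        if err := json.Unmarshal(raw, &one); err != nil {
            return nil, err
        }
        return []Status{one}, nil
    }

    func main() {
        out := []byte(`{"Name":"multinode-990000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped"}`)
        st, err := decodeStatuses(out)
        fmt.Println(st, err)
    }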

TestMultiNode/serial/StopNode (0.14s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-990000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-990000 node stop m03: exit status 85 (47.319417ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-990000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-990000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-990000 status: exit status 7 (30.818875ms)

-- stdout --
	multinode-990000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-990000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-990000 status --alsologtostderr: exit status 7 (30.382ms)

-- stdout --
	multinode-990000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0916 04:03:13.780422    4109 out.go:345] Setting OutFile to fd 1 ...
	I0916 04:03:13.780588    4109 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:03:13.780592    4109 out.go:358] Setting ErrFile to fd 2...
	I0916 04:03:13.780594    4109 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:03:13.780720    4109 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1133/.minikube/bin
	I0916 04:03:13.780840    4109 out.go:352] Setting JSON to false
	I0916 04:03:13.780848    4109 mustload.go:65] Loading cluster: multinode-990000
	I0916 04:03:13.780918    4109 notify.go:220] Checking for updates...
	I0916 04:03:13.781062    4109 config.go:182] Loaded profile config "multinode-990000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 04:03:13.781068    4109 status.go:255] checking status of multinode-990000 ...
	I0916 04:03:13.781298    4109 status.go:330] multinode-990000 host status = "Stopped" (err=<nil>)
	I0916 04:03:13.781302    4109 status.go:343] host is not running, skipping remaining checks
	I0916 04:03:13.781304    4109 status.go:257] multinode-990000 status: &{Name:multinode-990000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-990000 status --alsologtostderr": multinode-990000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-990000 -n multinode-990000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-990000 -n multinode-990000: exit status 7 (30.085959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-990000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.14s)

TestMultiNode/serial/StartAfterStop (44.86s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-990000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-990000 node start m03 -v=7 --alsologtostderr: exit status 85 (46.249459ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0916 04:03:13.841018    4113 out.go:345] Setting OutFile to fd 1 ...
	I0916 04:03:13.841240    4113 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:03:13.841248    4113 out.go:358] Setting ErrFile to fd 2...
	I0916 04:03:13.841250    4113 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:03:13.841377    4113 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1133/.minikube/bin
	I0916 04:03:13.841592    4113 mustload.go:65] Loading cluster: multinode-990000
	I0916 04:03:13.841784    4113 config.go:182] Loaded profile config "multinode-990000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 04:03:13.845990    4113 out.go:201] 
	W0916 04:03:13.849009    4113 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0916 04:03:13.849015    4113 out.go:270] * 
	* 
	W0916 04:03:13.850759    4113 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 04:03:13.853959    4113 out.go:201] 

** /stderr **
multinode_test.go:284: I0916 04:03:13.841018    4113 out.go:345] Setting OutFile to fd 1 ...
I0916 04:03:13.841240    4113 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 04:03:13.841248    4113 out.go:358] Setting ErrFile to fd 2...
I0916 04:03:13.841250    4113 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 04:03:13.841377    4113 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1133/.minikube/bin
I0916 04:03:13.841592    4113 mustload.go:65] Loading cluster: multinode-990000
I0916 04:03:13.841784    4113 config.go:182] Loaded profile config "multinode-990000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0916 04:03:13.845990    4113 out.go:201] 
W0916 04:03:13.849009    4113 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0916 04:03:13.849015    4113 out.go:270] * 
* 
W0916 04:03:13.850759    4113 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0916 04:03:13.853959    4113 out.go:201] 

multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-990000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-990000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-990000 status -v=7 --alsologtostderr: exit status 7 (30.444375ms)

-- stdout --
	multinode-990000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0916 04:03:13.887745    4115 out.go:345] Setting OutFile to fd 1 ...
	I0916 04:03:13.887891    4115 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:03:13.887894    4115 out.go:358] Setting ErrFile to fd 2...
	I0916 04:03:13.887897    4115 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:03:13.888025    4115 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1133/.minikube/bin
	I0916 04:03:13.888154    4115 out.go:352] Setting JSON to false
	I0916 04:03:13.888163    4115 mustload.go:65] Loading cluster: multinode-990000
	I0916 04:03:13.888221    4115 notify.go:220] Checking for updates...
	I0916 04:03:13.888376    4115 config.go:182] Loaded profile config "multinode-990000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 04:03:13.888382    4115 status.go:255] checking status of multinode-990000 ...
	I0916 04:03:13.888628    4115 status.go:330] multinode-990000 host status = "Stopped" (err=<nil>)
	I0916 04:03:13.888631    4115 status.go:343] host is not running, skipping remaining checks
	I0916 04:03:13.888633    4115 status.go:257] multinode-990000 status: &{Name:multinode-990000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-990000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-990000 status -v=7 --alsologtostderr: exit status 7 (71.271209ms)

-- stdout --
	multinode-990000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0916 04:03:14.558684    4117 out.go:345] Setting OutFile to fd 1 ...
	I0916 04:03:14.558908    4117 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:03:14.558912    4117 out.go:358] Setting ErrFile to fd 2...
	I0916 04:03:14.558915    4117 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:03:14.559088    4117 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1133/.minikube/bin
	I0916 04:03:14.559251    4117 out.go:352] Setting JSON to false
	I0916 04:03:14.559262    4117 mustload.go:65] Loading cluster: multinode-990000
	I0916 04:03:14.559302    4117 notify.go:220] Checking for updates...
	I0916 04:03:14.559552    4117 config.go:182] Loaded profile config "multinode-990000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 04:03:14.559560    4117 status.go:255] checking status of multinode-990000 ...
	I0916 04:03:14.559868    4117 status.go:330] multinode-990000 host status = "Stopped" (err=<nil>)
	I0916 04:03:14.559873    4117 status.go:343] host is not running, skipping remaining checks
	I0916 04:03:14.559875    4117 status.go:257] multinode-990000 status: &{Name:multinode-990000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-990000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-990000 status -v=7 --alsologtostderr: exit status 7 (72.832417ms)

-- stdout --
	multinode-990000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0916 04:03:16.767405    4119 out.go:345] Setting OutFile to fd 1 ...
	I0916 04:03:16.767619    4119 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:03:16.767624    4119 out.go:358] Setting ErrFile to fd 2...
	I0916 04:03:16.767627    4119 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:03:16.767795    4119 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1133/.minikube/bin
	I0916 04:03:16.767941    4119 out.go:352] Setting JSON to false
	I0916 04:03:16.767952    4119 mustload.go:65] Loading cluster: multinode-990000
	I0916 04:03:16.767992    4119 notify.go:220] Checking for updates...
	I0916 04:03:16.768229    4119 config.go:182] Loaded profile config "multinode-990000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 04:03:16.768237    4119 status.go:255] checking status of multinode-990000 ...
	I0916 04:03:16.768548    4119 status.go:330] multinode-990000 host status = "Stopped" (err=<nil>)
	I0916 04:03:16.768554    4119 status.go:343] host is not running, skipping remaining checks
	I0916 04:03:16.768556    4119 status.go:257] multinode-990000 status: &{Name:multinode-990000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-990000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-990000 status -v=7 --alsologtostderr: exit status 7 (52.14ms)

-- stdout --
	multinode-990000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0916 04:03:19.311346    4121 out.go:345] Setting OutFile to fd 1 ...
	I0916 04:03:19.311522    4121 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:03:19.311526    4121 out.go:358] Setting ErrFile to fd 2...
	I0916 04:03:19.311528    4121 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:03:19.311667    4121 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1133/.minikube/bin
	I0916 04:03:19.311788    4121 out.go:352] Setting JSON to false
	I0916 04:03:19.311802    4121 mustload.go:65] Loading cluster: multinode-990000
	I0916 04:03:19.311838    4121 notify.go:220] Checking for updates...
	I0916 04:03:19.312020    4121 config.go:182] Loaded profile config "multinode-990000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 04:03:19.312031    4121 status.go:255] checking status of multinode-990000 ...
	I0916 04:03:19.312284    4121 status.go:330] multinode-990000 host status = "Stopped" (err=<nil>)
	I0916 04:03:19.312288    4121 status.go:343] host is not running, skipping remaining checks
	I0916 04:03:19.312290    4121 status.go:257] multinode-990000 status: &{Name:multinode-990000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-990000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-990000 status -v=7 --alsologtostderr: exit status 7 (71.498083ms)

-- stdout --
	multinode-990000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0916 04:03:23.076735    4124 out.go:345] Setting OutFile to fd 1 ...
	I0916 04:03:23.076975    4124 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:03:23.076980    4124 out.go:358] Setting ErrFile to fd 2...
	I0916 04:03:23.076983    4124 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:03:23.077164    4124 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1133/.minikube/bin
	I0916 04:03:23.077350    4124 out.go:352] Setting JSON to false
	I0916 04:03:23.077362    4124 mustload.go:65] Loading cluster: multinode-990000
	I0916 04:03:23.077410    4124 notify.go:220] Checking for updates...
	I0916 04:03:23.077670    4124 config.go:182] Loaded profile config "multinode-990000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 04:03:23.077677    4124 status.go:255] checking status of multinode-990000 ...
	I0916 04:03:23.078023    4124 status.go:330] multinode-990000 host status = "Stopped" (err=<nil>)
	I0916 04:03:23.078028    4124 status.go:343] host is not running, skipping remaining checks
	I0916 04:03:23.078031    4124 status.go:257] multinode-990000 status: &{Name:multinode-990000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-990000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-990000 status -v=7 --alsologtostderr: exit status 7 (73.300625ms)

-- stdout --
	multinode-990000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0916 04:03:25.964231    4126 out.go:345] Setting OutFile to fd 1 ...
	I0916 04:03:25.964419    4126 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:03:25.964423    4126 out.go:358] Setting ErrFile to fd 2...
	I0916 04:03:25.964426    4126 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:03:25.964585    4126 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1133/.minikube/bin
	I0916 04:03:25.964731    4126 out.go:352] Setting JSON to false
	I0916 04:03:25.964743    4126 mustload.go:65] Loading cluster: multinode-990000
	I0916 04:03:25.964785    4126 notify.go:220] Checking for updates...
	I0916 04:03:25.965006    4126 config.go:182] Loaded profile config "multinode-990000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 04:03:25.965015    4126 status.go:255] checking status of multinode-990000 ...
	I0916 04:03:25.965318    4126 status.go:330] multinode-990000 host status = "Stopped" (err=<nil>)
	I0916 04:03:25.965322    4126 status.go:343] host is not running, skipping remaining checks
	I0916 04:03:25.965325    4126 status.go:257] multinode-990000 status: &{Name:multinode-990000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
E0916 04:03:27.627360    1652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/addons-490000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-990000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-990000 status -v=7 --alsologtostderr: exit status 7 (74.329916ms)

-- stdout --
	multinode-990000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0916 04:03:33.357062    4128 out.go:345] Setting OutFile to fd 1 ...
	I0916 04:03:33.357297    4128 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:03:33.357303    4128 out.go:358] Setting ErrFile to fd 2...
	I0916 04:03:33.357307    4128 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:03:33.357490    4128 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1133/.minikube/bin
	I0916 04:03:33.357667    4128 out.go:352] Setting JSON to false
	I0916 04:03:33.357679    4128 mustload.go:65] Loading cluster: multinode-990000
	I0916 04:03:33.357726    4128 notify.go:220] Checking for updates...
	I0916 04:03:33.358009    4128 config.go:182] Loaded profile config "multinode-990000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 04:03:33.358025    4128 status.go:255] checking status of multinode-990000 ...
	I0916 04:03:33.358345    4128 status.go:330] multinode-990000 host status = "Stopped" (err=<nil>)
	I0916 04:03:33.358351    4128 status.go:343] host is not running, skipping remaining checks
	I0916 04:03:33.358353    4128 status.go:257] multinode-990000 status: &{Name:multinode-990000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-990000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-990000 status -v=7 --alsologtostderr: exit status 7 (72.959834ms)

-- stdout --
	multinode-990000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0916 04:03:42.594306    4130 out.go:345] Setting OutFile to fd 1 ...
	I0916 04:03:42.594497    4130 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:03:42.594501    4130 out.go:358] Setting ErrFile to fd 2...
	I0916 04:03:42.594505    4130 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:03:42.594700    4130 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1133/.minikube/bin
	I0916 04:03:42.594850    4130 out.go:352] Setting JSON to false
	I0916 04:03:42.594861    4130 mustload.go:65] Loading cluster: multinode-990000
	I0916 04:03:42.594907    4130 notify.go:220] Checking for updates...
	I0916 04:03:42.595157    4130 config.go:182] Loaded profile config "multinode-990000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 04:03:42.595164    4130 status.go:255] checking status of multinode-990000 ...
	I0916 04:03:42.595490    4130 status.go:330] multinode-990000 host status = "Stopped" (err=<nil>)
	I0916 04:03:42.595495    4130 status.go:343] host is not running, skipping remaining checks
	I0916 04:03:42.595498    4130 status.go:257] multinode-990000 status: &{Name:multinode-990000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-990000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-990000 status -v=7 --alsologtostderr: exit status 7 (74.941208ms)

-- stdout --
	multinode-990000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0916 04:03:58.631016    4138 out.go:345] Setting OutFile to fd 1 ...
	I0916 04:03:58.631524    4138 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:03:58.631531    4138 out.go:358] Setting ErrFile to fd 2...
	I0916 04:03:58.631534    4138 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:03:58.631826    4138 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1133/.minikube/bin
	I0916 04:03:58.632279    4138 out.go:352] Setting JSON to false
	I0916 04:03:58.632304    4138 mustload.go:65] Loading cluster: multinode-990000
	I0916 04:03:58.632391    4138 notify.go:220] Checking for updates...
	I0916 04:03:58.632779    4138 config.go:182] Loaded profile config "multinode-990000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 04:03:58.632788    4138 status.go:255] checking status of multinode-990000 ...
	I0916 04:03:58.633127    4138 status.go:330] multinode-990000 host status = "Stopped" (err=<nil>)
	I0916 04:03:58.633133    4138 status.go:343] host is not running, skipping remaining checks
	I0916 04:03:58.633136    4138 status.go:257] multinode-990000 status: &{Name:multinode-990000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-990000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-990000 -n multinode-990000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-990000 -n multinode-990000: exit status 7 (33.164208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-990000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (44.86s)
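
Triage note: `node start m03` exits with status 85 (GUEST_NODE_RETRIEVE, "Could not find node m03") because m03 was never created; the earlier TestMultiNode/serial/FreshStart2Nodes failure left the profile with only its primary control-plane node. A hypothetical check, not part of the captured run, that would confirm which nodes the profile actually contains:

	out/minikube-darwin-arm64 node list -p multinode-990000

For this profile it would be expected to list only multinode-990000 itself, matching the single "host: Stopped" entry that every status call above returns.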

TestMultiNode/serial/RestartKeepsNodes (9.25s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-990000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-990000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-990000: (3.89677625s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-990000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-990000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.219299667s)

-- stdout --
	* [multinode-990000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19651
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-990000" primary control-plane node in "multinode-990000" cluster
	* Restarting existing qemu2 VM for "multinode-990000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-990000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0916 04:04:02.656184    4164 out.go:345] Setting OutFile to fd 1 ...
	I0916 04:04:02.656330    4164 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:04:02.656335    4164 out.go:358] Setting ErrFile to fd 2...
	I0916 04:04:02.656338    4164 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:04:02.656507    4164 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1133/.minikube/bin
	I0916 04:04:02.657786    4164 out.go:352] Setting JSON to false
	I0916 04:04:02.676975    4164 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3805,"bootTime":1726480837,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0916 04:04:02.677039    4164 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 04:04:02.681805    4164 out.go:177] * [multinode-990000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0916 04:04:02.688809    4164 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 04:04:02.688854    4164 notify.go:220] Checking for updates...
	I0916 04:04:02.695625    4164 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	I0916 04:04:02.698712    4164 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0916 04:04:02.701699    4164 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 04:04:02.704687    4164 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	I0916 04:04:02.707741    4164 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 04:04:02.710967    4164 config.go:182] Loaded profile config "multinode-990000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 04:04:02.711028    4164 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 04:04:02.714653    4164 out.go:177] * Using the qemu2 driver based on existing profile
	I0916 04:04:02.721717    4164 start.go:297] selected driver: qemu2
	I0916 04:04:02.721725    4164 start.go:901] validating driver "qemu2" against &{Name:multinode-990000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-990000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 04:04:02.721793    4164 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 04:04:02.724184    4164 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 04:04:02.724208    4164 cni.go:84] Creating CNI manager for ""
	I0916 04:04:02.724232    4164 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0916 04:04:02.724283    4164 start.go:340] cluster config:
	{Name:multinode-990000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-990000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 04:04:02.727964    4164 iso.go:125] acquiring lock: {Name:mk55e9fb1297ea51932361bbd0234f8c1091a697 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 04:04:02.735745    4164 out.go:177] * Starting "multinode-990000" primary control-plane node in "multinode-990000" cluster
	I0916 04:04:02.739737    4164 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 04:04:02.739753    4164 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0916 04:04:02.739770    4164 cache.go:56] Caching tarball of preloaded images
	I0916 04:04:02.739852    4164 preload.go:172] Found /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0916 04:04:02.739858    4164 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0916 04:04:02.739917    4164 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/multinode-990000/config.json ...
	I0916 04:04:02.740354    4164 start.go:360] acquireMachinesLock for multinode-990000: {Name:mk751074fc8f4092f9eb4cf42ed6e63cca97a7fa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 04:04:02.740391    4164 start.go:364] duration metric: took 29.667µs to acquireMachinesLock for "multinode-990000"
	I0916 04:04:02.740402    4164 start.go:96] Skipping create...Using existing machine configuration
	I0916 04:04:02.740406    4164 fix.go:54] fixHost starting: 
	I0916 04:04:02.740527    4164 fix.go:112] recreateIfNeeded on multinode-990000: state=Stopped err=<nil>
	W0916 04:04:02.740536    4164 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 04:04:02.743735    4164 out.go:177] * Restarting existing qemu2 VM for "multinode-990000" ...
	I0916 04:04:02.751707    4164 qemu.go:418] Using hvf for hardware acceleration
	I0916 04:04:02.751749    4164 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/multinode-990000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/multinode-990000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/multinode-990000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:8c:38:57:18:3c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/multinode-990000/disk.qcow2
	I0916 04:04:02.753732    4164 main.go:141] libmachine: STDOUT: 
	I0916 04:04:02.753754    4164 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 04:04:02.753788    4164 fix.go:56] duration metric: took 13.381375ms for fixHost
	I0916 04:04:02.753794    4164 start.go:83] releasing machines lock for "multinode-990000", held for 13.397ms
	W0916 04:04:02.753801    4164 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0916 04:04:02.753846    4164 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 04:04:02.753850    4164 start.go:729] Will try again in 5 seconds ...
	I0916 04:04:07.755917    4164 start.go:360] acquireMachinesLock for multinode-990000: {Name:mk751074fc8f4092f9eb4cf42ed6e63cca97a7fa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 04:04:07.756371    4164 start.go:364] duration metric: took 366.541µs to acquireMachinesLock for "multinode-990000"
	I0916 04:04:07.756509    4164 start.go:96] Skipping create...Using existing machine configuration
	I0916 04:04:07.756529    4164 fix.go:54] fixHost starting: 
	I0916 04:04:07.757221    4164 fix.go:112] recreateIfNeeded on multinode-990000: state=Stopped err=<nil>
	W0916 04:04:07.757253    4164 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 04:04:07.761719    4164 out.go:177] * Restarting existing qemu2 VM for "multinode-990000" ...
	I0916 04:04:07.768684    4164 qemu.go:418] Using hvf for hardware acceleration
	I0916 04:04:07.768953    4164 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/multinode-990000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/multinode-990000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/multinode-990000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:8c:38:57:18:3c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/multinode-990000/disk.qcow2
	I0916 04:04:07.777630    4164 main.go:141] libmachine: STDOUT: 
	I0916 04:04:07.777709    4164 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 04:04:07.777775    4164 fix.go:56] duration metric: took 21.244583ms for fixHost
	I0916 04:04:07.777795    4164 start.go:83] releasing machines lock for "multinode-990000", held for 21.403792ms
	W0916 04:04:07.777948    4164 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-990000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 04:04:07.785563    4164 out.go:201] 
	W0916 04:04:07.789704    4164 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0916 04:04:07.789775    4164 out.go:270] * 
	W0916 04:04:07.792605    4164 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 04:04:07.800578    4164 out.go:201] 

** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-990000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-990000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-990000 -n multinode-990000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-990000 -n multinode-990000: exit status 7 (32.844791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-990000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (9.25s)
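
Triage note: both restart attempts die at the same step. QEMU cannot reach its networking helper ('Failed to connect to "/var/run/socket_vmnet": Connection refused'), so the VM is never booted; this implicates the socket_vmnet daemon on the build host rather than minikube itself. A plausible first check, with hypothetical commands that assume socket_vmnet was installed via Homebrew (as the SocketVMnetClientPath in the profile suggests):

	# Does the socket minikube is configured to use exist, and is anything serving it?
	sudo ls -l /var/run/socket_vmnet
	# Restart the daemon; the service name assumes a Homebrew-managed install
	sudo brew services restart socket_vmnet

Every qemu2-driver failure in this report that mentions /var/run/socket_vmnet would be expected to share this root cause.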

TestMultiNode/serial/DeleteNode (0.1s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-990000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-990000 node delete m03: exit status 83 (39.620209ms)

-- stdout --
	* The control-plane node multinode-990000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-990000"

-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-990000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-990000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-990000 status --alsologtostderr: exit status 7 (29.333833ms)

-- stdout --
	multinode-990000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0916 04:04:07.984110    4178 out.go:345] Setting OutFile to fd 1 ...
	I0916 04:04:07.984248    4178 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:04:07.984251    4178 out.go:358] Setting ErrFile to fd 2...
	I0916 04:04:07.984254    4178 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:04:07.984373    4178 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1133/.minikube/bin
	I0916 04:04:07.984492    4178 out.go:352] Setting JSON to false
	I0916 04:04:07.984501    4178 mustload.go:65] Loading cluster: multinode-990000
	I0916 04:04:07.984592    4178 notify.go:220] Checking for updates...
	I0916 04:04:07.984727    4178 config.go:182] Loaded profile config "multinode-990000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 04:04:07.984735    4178 status.go:255] checking status of multinode-990000 ...
	I0916 04:04:07.984993    4178 status.go:330] multinode-990000 host status = "Stopped" (err=<nil>)
	I0916 04:04:07.984997    4178 status.go:343] host is not running, skipping remaining checks
	I0916 04:04:07.985000    4178 status.go:257] multinode-990000 status: &{Name:multinode-990000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-990000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-990000 -n multinode-990000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-990000 -n multinode-990000: exit status 7 (30.272083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-990000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)

TestMultiNode/serial/StopMultiNode (3.08s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-990000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-990000 stop: (2.953280791s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-990000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-990000 status: exit status 7 (66.860875ms)

-- stdout --
	multinode-990000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-990000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-990000 status --alsologtostderr: exit status 7 (32.905333ms)

-- stdout --
	multinode-990000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0916 04:04:11.068084    4202 out.go:345] Setting OutFile to fd 1 ...
	I0916 04:04:11.068223    4202 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:04:11.068227    4202 out.go:358] Setting ErrFile to fd 2...
	I0916 04:04:11.068230    4202 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:04:11.068367    4202 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1133/.minikube/bin
	I0916 04:04:11.068495    4202 out.go:352] Setting JSON to false
	I0916 04:04:11.068504    4202 mustload.go:65] Loading cluster: multinode-990000
	I0916 04:04:11.068572    4202 notify.go:220] Checking for updates...
	I0916 04:04:11.068722    4202 config.go:182] Loaded profile config "multinode-990000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 04:04:11.068728    4202 status.go:255] checking status of multinode-990000 ...
	I0916 04:04:11.068965    4202 status.go:330] multinode-990000 host status = "Stopped" (err=<nil>)
	I0916 04:04:11.068968    4202 status.go:343] host is not running, skipping remaining checks
	I0916 04:04:11.068970    4202 status.go:257] multinode-990000 status: &{Name:multinode-990000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-990000 status --alsologtostderr": multinode-990000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-990000 status --alsologtostderr": multinode-990000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-990000 -n multinode-990000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-990000 -n multinode-990000: exit status 7 (30.318583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-990000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (3.08s)
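
Triage note: the assertions at multinode_test.go:364 and multinode_test.go:368 fail on the count of stopped hosts and kubelets, not on their state: a two-node cluster should report two "host: Stopped" / "kubelet: Stopped" pairs, but only the primary node ever existed. A hypothetical one-liner, not part of the captured run, that reproduces the count the test performs:

	out/minikube-darwin-arm64 -p multinode-990000 status --alsologtostderr | grep -c 'host: Stopped'

For the output above this would print 1 rather than the expected 2.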

TestMultiNode/serial/RestartMultiNode (5.25s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-990000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-990000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.181656333s)

-- stdout --
	* [multinode-990000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19651
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-990000" primary control-plane node in "multinode-990000" cluster
	* Restarting existing qemu2 VM for "multinode-990000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-990000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0916 04:04:11.128407    4206 out.go:345] Setting OutFile to fd 1 ...
	I0916 04:04:11.128540    4206 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:04:11.128544    4206 out.go:358] Setting ErrFile to fd 2...
	I0916 04:04:11.128546    4206 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:04:11.128659    4206 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1133/.minikube/bin
	I0916 04:04:11.129627    4206 out.go:352] Setting JSON to false
	I0916 04:04:11.145408    4206 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3814,"bootTime":1726480837,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0916 04:04:11.145476    4206 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 04:04:11.150705    4206 out.go:177] * [multinode-990000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0916 04:04:11.157594    4206 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 04:04:11.157646    4206 notify.go:220] Checking for updates...
	I0916 04:04:11.164609    4206 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	I0916 04:04:11.167608    4206 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0916 04:04:11.170602    4206 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 04:04:11.173579    4206 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	I0916 04:04:11.176570    4206 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 04:04:11.179948    4206 config.go:182] Loaded profile config "multinode-990000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 04:04:11.180194    4206 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 04:04:11.183542    4206 out.go:177] * Using the qemu2 driver based on existing profile
	I0916 04:04:11.190618    4206 start.go:297] selected driver: qemu2
	I0916 04:04:11.190626    4206 start.go:901] validating driver "qemu2" against &{Name:multinode-990000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-990000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 04:04:11.190702    4206 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 04:04:11.192861    4206 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 04:04:11.192884    4206 cni.go:84] Creating CNI manager for ""
	I0916 04:04:11.192907    4206 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0916 04:04:11.192949    4206 start.go:340] cluster config:
	{Name:multinode-990000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-990000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 04:04:11.196402    4206 iso.go:125] acquiring lock: {Name:mk55e9fb1297ea51932361bbd0234f8c1091a697 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 04:04:11.203566    4206 out.go:177] * Starting "multinode-990000" primary control-plane node in "multinode-990000" cluster
	I0916 04:04:11.207614    4206 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 04:04:11.207630    4206 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0916 04:04:11.207641    4206 cache.go:56] Caching tarball of preloaded images
	I0916 04:04:11.207699    4206 preload.go:172] Found /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0916 04:04:11.207705    4206 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0916 04:04:11.207767    4206 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/multinode-990000/config.json ...
	I0916 04:04:11.208220    4206 start.go:360] acquireMachinesLock for multinode-990000: {Name:mk751074fc8f4092f9eb4cf42ed6e63cca97a7fa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 04:04:11.208250    4206 start.go:364] duration metric: took 24.125µs to acquireMachinesLock for "multinode-990000"
	I0916 04:04:11.208260    4206 start.go:96] Skipping create...Using existing machine configuration
	I0916 04:04:11.208266    4206 fix.go:54] fixHost starting: 
	I0916 04:04:11.208388    4206 fix.go:112] recreateIfNeeded on multinode-990000: state=Stopped err=<nil>
	W0916 04:04:11.208397    4206 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 04:04:11.212586    4206 out.go:177] * Restarting existing qemu2 VM for "multinode-990000" ...
	I0916 04:04:11.220515    4206 qemu.go:418] Using hvf for hardware acceleration
	I0916 04:04:11.220555    4206 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/multinode-990000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/multinode-990000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/multinode-990000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:8c:38:57:18:3c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/multinode-990000/disk.qcow2
	I0916 04:04:11.222587    4206 main.go:141] libmachine: STDOUT: 
	I0916 04:04:11.222606    4206 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 04:04:11.222640    4206 fix.go:56] duration metric: took 14.37525ms for fixHost
	I0916 04:04:11.222644    4206 start.go:83] releasing machines lock for "multinode-990000", held for 14.389458ms
	W0916 04:04:11.222650    4206 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0916 04:04:11.222695    4206 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 04:04:11.222700    4206 start.go:729] Will try again in 5 seconds ...
	I0916 04:04:16.224783    4206 start.go:360] acquireMachinesLock for multinode-990000: {Name:mk751074fc8f4092f9eb4cf42ed6e63cca97a7fa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 04:04:16.225204    4206 start.go:364] duration metric: took 310.25µs to acquireMachinesLock for "multinode-990000"
	I0916 04:04:16.225317    4206 start.go:96] Skipping create...Using existing machine configuration
	I0916 04:04:16.225334    4206 fix.go:54] fixHost starting: 
	I0916 04:04:16.226022    4206 fix.go:112] recreateIfNeeded on multinode-990000: state=Stopped err=<nil>
	W0916 04:04:16.226046    4206 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 04:04:16.230486    4206 out.go:177] * Restarting existing qemu2 VM for "multinode-990000" ...
	I0916 04:04:16.237389    4206 qemu.go:418] Using hvf for hardware acceleration
	I0916 04:04:16.237718    4206 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/multinode-990000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/multinode-990000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/multinode-990000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:8c:38:57:18:3c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/multinode-990000/disk.qcow2
	I0916 04:04:16.246558    4206 main.go:141] libmachine: STDOUT: 
	I0916 04:04:16.246623    4206 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 04:04:16.246682    4206 fix.go:56] duration metric: took 21.349958ms for fixHost
	I0916 04:04:16.246695    4206 start.go:83] releasing machines lock for "multinode-990000", held for 21.467917ms
	W0916 04:04:16.246843    4206 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-990000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-990000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 04:04:16.254375    4206 out.go:201] 
	W0916 04:04:16.258452    4206 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0916 04:04:16.258481    4206 out.go:270] * 
	* 
	W0916 04:04:16.261332    4206 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 04:04:16.268435    4206 out.go:201] 

** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-990000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-990000 -n multinode-990000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-990000 -n multinode-990000: exit status 7 (68.8265ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-990000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.25s)
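Triage note: every failure in this group reduces to the same host-side condition. The socket_vmnet helper that minikube's qemu2 driver uses for guest networking is not accepting connections on /var/run/socket_vmnet, so each socket_vmnet_client invocation above exits with "Connection refused" before QEMU ever boots. A host-side check might look like the sketch below (assuming a Homebrew-managed socket_vmnet install at the paths shown in the log; the service name is an assumption):

    # Is the helper process alive, and does its unix socket exist?
    pgrep -fl socket_vmnet
    ls -l /var/run/socket_vmnet
    # Restart the launchd service shipped with the Homebrew formula (assumed service name).
    sudo brew services restart socket_vmnet
    # Confirm a client can connect by running a no-op command through the helper.
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true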

TestMultiNode/serial/ValidateNameConflict (19.97s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-990000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-990000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-990000-m01 --driver=qemu2 : exit status 80 (9.802501041s)

-- stdout --
	* [multinode-990000-m01] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19651
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-990000-m01" primary control-plane node in "multinode-990000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-990000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-990000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-990000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-990000-m02 --driver=qemu2 : exit status 80 (9.94514575s)

-- stdout --
	* [multinode-990000-m02] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19651
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-990000-m02" primary control-plane node in "multinode-990000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-990000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-990000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-990000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-990000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-990000: exit status 83 (80.995375ms)

-- stdout --
	* The control-plane node multinode-990000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-990000"

-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-990000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-990000 -n multinode-990000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-990000 -n multinode-990000: exit status 7 (30.344083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-990000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (19.97s)

TestPreload (9.98s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-804000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-804000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.82560875s)

-- stdout --
	* [test-preload-804000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19651
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-804000" primary control-plane node in "test-preload-804000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-804000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0916 04:04:36.464629    4258 out.go:345] Setting OutFile to fd 1 ...
	I0916 04:04:36.464789    4258 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:04:36.464792    4258 out.go:358] Setting ErrFile to fd 2...
	I0916 04:04:36.464795    4258 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:04:36.464931    4258 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1133/.minikube/bin
	I0916 04:04:36.466007    4258 out.go:352] Setting JSON to false
	I0916 04:04:36.481877    4258 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3839,"bootTime":1726480837,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0916 04:04:36.481958    4258 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 04:04:36.488774    4258 out.go:177] * [test-preload-804000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0916 04:04:36.497630    4258 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 04:04:36.497689    4258 notify.go:220] Checking for updates...
	I0916 04:04:36.505615    4258 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	I0916 04:04:36.508679    4258 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0916 04:04:36.511632    4258 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 04:04:36.514651    4258 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	I0916 04:04:36.517573    4258 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 04:04:36.520968    4258 config.go:182] Loaded profile config "multinode-990000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 04:04:36.521017    4258 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 04:04:36.525610    4258 out.go:177] * Using the qemu2 driver based on user configuration
	I0916 04:04:36.532661    4258 start.go:297] selected driver: qemu2
	I0916 04:04:36.532668    4258 start.go:901] validating driver "qemu2" against <nil>
	I0916 04:04:36.532676    4258 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 04:04:36.535087    4258 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 04:04:36.537557    4258 out.go:177] * Automatically selected the socket_vmnet network
	I0916 04:04:36.540682    4258 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 04:04:36.540699    4258 cni.go:84] Creating CNI manager for ""
	I0916 04:04:36.540722    4258 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 04:04:36.540726    4258 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0916 04:04:36.540756    4258 start.go:340] cluster config:
	{Name:test-preload-804000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-804000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 04:04:36.544572    4258 iso.go:125] acquiring lock: {Name:mk55e9fb1297ea51932361bbd0234f8c1091a697 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 04:04:36.552635    4258 out.go:177] * Starting "test-preload-804000" primary control-plane node in "test-preload-804000" cluster
	I0916 04:04:36.556629    4258 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0916 04:04:36.556744    4258 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/test-preload-804000/config.json ...
	I0916 04:04:36.556761    4258 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/test-preload-804000/config.json: {Name:mk79bf391b6d0f134a1a7d5c6fc3795b923bf6af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 04:04:36.556751    4258 cache.go:107] acquiring lock: {Name:mk757e29d8fcbb1c2f9b7cb7704e295731e3b58f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 04:04:36.556750    4258 cache.go:107] acquiring lock: {Name:mka8831118c6e2731aba875e0895aa259e84313b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 04:04:36.556777    4258 cache.go:107] acquiring lock: {Name:mk823c9cbca557f7625f2058a5406433a8f30324 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 04:04:36.556918    4258 cache.go:107] acquiring lock: {Name:mkf1ca10527d847a370bc88a1a7bdad34513dfa8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 04:04:36.556995    4258 cache.go:107] acquiring lock: {Name:mk70a8dc88734d0993a6325d8b5650e9b357c601 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 04:04:36.556995    4258 cache.go:107] acquiring lock: {Name:mk15ea1b3055ed22c01e2fff1633367be96b4ba1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 04:04:36.557013    4258 cache.go:107] acquiring lock: {Name:mk6eed11cc90a78a47c254b04629060a4e2ee7d5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 04:04:36.557041    4258 cache.go:107] acquiring lock: {Name:mkcacb2c9167599d8dd03e82fc4f84af5718aad9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 04:04:36.557050    4258 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0916 04:04:36.557051    4258 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 04:04:36.557087    4258 start.go:360] acquireMachinesLock for test-preload-804000: {Name:mk751074fc8f4092f9eb4cf42ed6e63cca97a7fa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 04:04:36.557134    4258 start.go:364] duration metric: took 34.292µs to acquireMachinesLock for "test-preload-804000"
	I0916 04:04:36.557205    4258 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0916 04:04:36.557146    4258 start.go:93] Provisioning new machine with config: &{Name:test-preload-804000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-804000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 04:04:36.557054    4258 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0916 04:04:36.557246    4258 start.go:125] createHost starting for "" (driver="qemu2")
	I0916 04:04:36.557289    4258 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0916 04:04:36.557293    4258 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0916 04:04:36.557378    4258 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0916 04:04:36.557440    4258 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0916 04:04:36.564605    4258 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0916 04:04:36.567612    4258 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0916 04:04:36.569337    4258 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0916 04:04:36.569422    4258 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 04:04:36.569851    4258 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0916 04:04:36.570724    4258 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0916 04:04:36.571449    4258 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0916 04:04:36.571659    4258 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0916 04:04:36.572010    4258 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0916 04:04:36.583405    4258 start.go:159] libmachine.API.Create for "test-preload-804000" (driver="qemu2")
	I0916 04:04:36.583427    4258 client.go:168] LocalClient.Create starting
	I0916 04:04:36.583510    4258 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/ca.pem
	I0916 04:04:36.583542    4258 main.go:141] libmachine: Decoding PEM data...
	I0916 04:04:36.583554    4258 main.go:141] libmachine: Parsing certificate...
	I0916 04:04:36.583597    4258 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/cert.pem
	I0916 04:04:36.583628    4258 main.go:141] libmachine: Decoding PEM data...
	I0916 04:04:36.583636    4258 main.go:141] libmachine: Parsing certificate...
	I0916 04:04:36.583987    4258 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19651-1133/.minikube/cache/iso/arm64/minikube-v1.34.0-1726415472-19646-arm64.iso...
	I0916 04:04:36.745263    4258 main.go:141] libmachine: Creating SSH key...
	I0916 04:04:36.847544    4258 main.go:141] libmachine: Creating Disk image...
	I0916 04:04:36.847568    4258 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0916 04:04:36.847739    4258 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/test-preload-804000/disk.qcow2.raw /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/test-preload-804000/disk.qcow2
	I0916 04:04:36.857128    4258 main.go:141] libmachine: STDOUT: 
	I0916 04:04:36.857162    4258 main.go:141] libmachine: STDERR: 
	I0916 04:04:36.857228    4258 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/test-preload-804000/disk.qcow2 +20000M
	I0916 04:04:36.866682    4258 main.go:141] libmachine: STDOUT: Image resized.
	
	I0916 04:04:36.866701    4258 main.go:141] libmachine: STDERR: 
	I0916 04:04:36.866725    4258 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/test-preload-804000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/test-preload-804000/disk.qcow2
	I0916 04:04:36.866729    4258 main.go:141] libmachine: Starting QEMU VM...
	I0916 04:04:36.866742    4258 qemu.go:418] Using hvf for hardware acceleration
	I0916 04:04:36.866768    4258 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/test-preload-804000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/test-preload-804000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/test-preload-804000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:9c:44:dc:bd:0f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/test-preload-804000/disk.qcow2
	I0916 04:04:36.868579    4258 main.go:141] libmachine: STDOUT: 
	I0916 04:04:36.868595    4258 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 04:04:36.868615    4258 client.go:171] duration metric: took 285.187375ms to LocalClient.Create
	I0916 04:04:37.109205    4258 cache.go:162] opening:  /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0916 04:04:37.128067    4258 cache.go:162] opening:  /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0916 04:04:37.145061    4258 cache.go:162] opening:  /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0916 04:04:37.159624    4258 cache.go:162] opening:  /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0916 04:04:37.183785    4258 cache.go:162] opening:  /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	W0916 04:04:37.190961    4258 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0916 04:04:37.190989    4258 cache.go:162] opening:  /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0916 04:04:37.235100    4258 cache.go:162] opening:  /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0916 04:04:37.286138    4258 cache.go:157] /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0916 04:04:37.286190    4258 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 729.322292ms
	I0916 04:04:37.286221    4258 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W0916 04:04:37.584482    4258 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0916 04:04:37.584600    4258 cache.go:162] opening:  /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0916 04:04:38.058984    4258 cache.go:157] /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0916 04:04:38.059085    4258 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.502360625s
	I0916 04:04:38.059114    4258 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0916 04:04:38.868904    4258 start.go:128] duration metric: took 2.3116485s to createHost
	I0916 04:04:38.868967    4258 start.go:83] releasing machines lock for "test-preload-804000", held for 2.311867791s
	W0916 04:04:38.869015    4258 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 04:04:38.881074    4258 out.go:177] * Deleting "test-preload-804000" in qemu2 ...
	W0916 04:04:38.913736    4258 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 04:04:38.913760    4258 start.go:729] Will try again in 5 seconds ...
	I0916 04:04:39.764932    4258 cache.go:157] /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0916 04:04:39.764980    4258 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 3.20810375s
	I0916 04:04:39.765022    4258 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0916 04:04:40.083536    4258 cache.go:157] /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0916 04:04:40.083605    4258 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 3.526705s
	I0916 04:04:40.083636    4258 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0916 04:04:40.886655    4258 cache.go:157] /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0916 04:04:40.886705    4258 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 4.330034167s
	I0916 04:04:40.886729    4258 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0916 04:04:41.106542    4258 cache.go:157] /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0916 04:04:41.106602    4258 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 4.549949458s
	I0916 04:04:41.106626    4258 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0916 04:04:41.869490    4258 cache.go:157] /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0916 04:04:41.869537    4258 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 5.31259875s
	I0916 04:04:41.869561    4258 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0916 04:04:43.913861    4258 start.go:360] acquireMachinesLock for test-preload-804000: {Name:mk751074fc8f4092f9eb4cf42ed6e63cca97a7fa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 04:04:43.914292    4258 start.go:364] duration metric: took 354.208µs to acquireMachinesLock for "test-preload-804000"
	I0916 04:04:43.914412    4258 start.go:93] Provisioning new machine with config: &{Name:test-preload-804000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-804000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 04:04:43.914623    4258 start.go:125] createHost starting for "" (driver="qemu2")
	I0916 04:04:43.920114    4258 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0916 04:04:43.971916    4258 start.go:159] libmachine.API.Create for "test-preload-804000" (driver="qemu2")
	I0916 04:04:43.971980    4258 client.go:168] LocalClient.Create starting
	I0916 04:04:43.972084    4258 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/ca.pem
	I0916 04:04:43.972148    4258 main.go:141] libmachine: Decoding PEM data...
	I0916 04:04:43.972169    4258 main.go:141] libmachine: Parsing certificate...
	I0916 04:04:43.972233    4258 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/cert.pem
	I0916 04:04:43.972278    4258 main.go:141] libmachine: Decoding PEM data...
	I0916 04:04:43.972295    4258 main.go:141] libmachine: Parsing certificate...
	I0916 04:04:43.972830    4258 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19651-1133/.minikube/cache/iso/arm64/minikube-v1.34.0-1726415472-19646-arm64.iso...
	I0916 04:04:44.143623    4258 main.go:141] libmachine: Creating SSH key...
	I0916 04:04:44.184079    4258 main.go:141] libmachine: Creating Disk image...
	I0916 04:04:44.184085    4258 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0916 04:04:44.184277    4258 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/test-preload-804000/disk.qcow2.raw /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/test-preload-804000/disk.qcow2
	I0916 04:04:44.193676    4258 main.go:141] libmachine: STDOUT: 
	I0916 04:04:44.193693    4258 main.go:141] libmachine: STDERR: 
	I0916 04:04:44.193763    4258 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/test-preload-804000/disk.qcow2 +20000M
	I0916 04:04:44.201767    4258 main.go:141] libmachine: STDOUT: Image resized.
	
	I0916 04:04:44.201797    4258 main.go:141] libmachine: STDERR: 
	I0916 04:04:44.201808    4258 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/test-preload-804000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/test-preload-804000/disk.qcow2
	I0916 04:04:44.201812    4258 main.go:141] libmachine: Starting QEMU VM...
	I0916 04:04:44.201822    4258 qemu.go:418] Using hvf for hardware acceleration
	I0916 04:04:44.201861    4258 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/test-preload-804000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/test-preload-804000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/test-preload-804000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:c0:a0:35:31:0a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/test-preload-804000/disk.qcow2
	I0916 04:04:44.203534    4258 main.go:141] libmachine: STDOUT: 
	I0916 04:04:44.203556    4258 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 04:04:44.203568    4258 client.go:171] duration metric: took 231.587208ms to LocalClient.Create
	I0916 04:04:46.203708    4258 start.go:128] duration metric: took 2.289095083s to createHost
	I0916 04:04:46.203772    4258 start.go:83] releasing machines lock for "test-preload-804000", held for 2.289497625s
	W0916 04:04:46.204033    4258 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-804000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-804000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 04:04:46.220641    4258 out.go:201] 
	W0916 04:04:46.226163    4258 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0916 04:04:46.226200    4258 out.go:270] * 
	* 
	W0916 04:04:46.228578    4258 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 04:04:46.245667    4258 out.go:201] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-804000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:629: *** TestPreload FAILED at 2024-09-16 04:04:46.264268 -0700 PDT m=+2702.285478959
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-804000 -n test-preload-804000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-804000 -n test-preload-804000: exit status 7 (67.071458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-804000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-804000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-804000
--- FAIL: TestPreload (9.98s)
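Note: because the test passes --preload=false, minikube falls back to pulling and caching each component image individually; the cache.go lines above show those images landing in the arm64 cache even though the VM never started, so the failure is confined to host networking. The cached artifacts can be inspected directly on disk (a sketch; the path is taken verbatim from the log):

    # List the per-image tarballs the log reports as cached.
    ls /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io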

TestScheduledStopUnix (10.14s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-402000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-402000 --memory=2048 --driver=qemu2 : exit status 80 (9.984589334s)

-- stdout --
	* [scheduled-stop-402000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19651
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-402000" primary control-plane node in "scheduled-stop-402000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-402000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-402000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-402000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19651
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-402000" primary control-plane node in "scheduled-stop-402000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-402000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-402000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:629: *** TestScheduledStopUnix FAILED at 2024-09-16 04:04:56.397533 -0700 PDT m=+2712.418944251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-402000 -n scheduled-stop-402000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-402000 -n scheduled-stop-402000: exit status 7 (72.336375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-402000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-402000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-402000
--- FAIL: TestScheduledStopUnix (10.14s)
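Every qemu2 start in this report dies the same way: both VM-creation attempts abort with `ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused` before a guest ever boots. "Connection refused" on a unix-domain socket means nothing is listening at that path, which on these agents typically means the socket_vmnet daemon is not running (or is bound at a different location). The Go sketch below is a minimal standalone probe, not part of the test suite; it assumes only the socket path printed in the failures above and reproduces the connectivity check the qemu2 driver is failing:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Socket path taken from the failure messages in this report; adjust it
	// if your socket_vmnet installation uses a different location.
	const sock = "/var/run/socket_vmnet"

	// A unix-domain dial returns "connection refused" when no daemon is
	// bound to the path, the same condition the qemu2 driver surfaces.
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Printf("socket_vmnet unreachable: %v\n", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

Run it with `go run` under any file name; if it prints "unreachable" with a connection-refused error, every subsequent qemu2 test in this report can be expected to fail identically.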

TestSkaffold (12.79s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe4226741449 version
skaffold_test.go:59: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe4226741449 version: (1.066743334s)
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-704000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-704000 --memory=2600 --driver=qemu2 : exit status 80 (9.878973416s)

-- stdout --
	* [skaffold-704000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19651
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-704000" primary control-plane node in "skaffold-704000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-704000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-704000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-704000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19651
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-704000" primary control-plane node in "skaffold-704000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-704000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-704000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:629: *** TestSkaffold FAILED at 2024-09-16 04:05:09.193233 -0700 PDT m=+2725.214897334
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-704000 -n skaffold-704000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-704000 -n skaffold-704000: exit status 7 (61.576833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-704000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-704000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-704000
--- FAIL: TestSkaffold (12.79s)
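The post-mortem block above shows the helper convention for failed starts: helpers_test.go asks `minikube status --format={{.Host}}` for the profile's host state, and in this run the command exits with status 7 while printing "Stopped", which the helpers treat as "may be ok" and use to skip log retrieval. Below is a small standalone Go sketch of that exit-code handling; the binary path and profile name are simply the ones from this report, and the "may be ok" reading of exit status 7 follows the helper's own log line rather than any documented contract:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Command line copied from the post-mortem step above.
	cmd := exec.Command("out/minikube-darwin-arm64", "status",
		"--format={{.Host}}", "-p", "skaffold-704000", "-n", "skaffold-704000")
	out, err := cmd.Output() // stdout is still captured on a non-zero exit
	state := strings.TrimSpace(string(out))

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// In this run: state "Stopped", exit status 7, treated as non-fatal;
		// the helpers skip log retrieval for a host that is not running.
		fmt.Printf("state=%q exit=%d (may be ok)\n", state, exitErr.ExitCode())
		return
	}
	if err != nil {
		fmt.Println("status did not run:", err)
		return
	}
	fmt.Println("state:", state)
}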

TestRunningBinaryUpgrade (594.65s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.867117415 start -p running-upgrade-588000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.867117415 start -p running-upgrade-588000 --memory=2200 --vm-driver=qemu2 : (54.866280583s)
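The two (dbg) steps around this point are the heart of the upgrade test: version_upgrade_test.go first brings the profile up with a previously released binary (minikube v1.26.0, downloaded to the temp path above, which succeeds here in about 55s), then re-runs `start` on the same profile with the freshly built out/minikube-darwin-arm64, which is where this run spends 8m22s before exiting with status 80. The condensed Go sketch below illustrates the shape of that two-phase flow rather than reproducing the test's assertions; the paths and flags are copied from the two Run lines:

package main

import (
	"fmt"
	"os/exec"
)

// run executes a binary and echoes its combined output, mirroring the
// test's (dbg) Run logging in spirit.
func run(bin string, args ...string) error {
	out, err := exec.Command(bin, args...).CombinedOutput()
	fmt.Printf("$ %s %v\n%s", bin, args, out)
	return err
}

func main() {
	const profile = "running-upgrade-588000" // profile name from this log

	// Phase 1: start the cluster with the old released binary (temp path
	// copied from the Run line above).
	oldBin := "/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.867117415"
	if err := run(oldBin, "start", "-p", profile, "--memory=2200", "--vm-driver=qemu2"); err != nil {
		fmt.Println("old-binary start failed:", err)
		return
	}

	// Phase 2: the freshly built binary upgrades the running profile in
	// place; in this report this is the step that exits with status 80.
	if err := run("out/minikube-darwin-arm64", "start", "-p", profile,
		"--memory=2200", "--alsologtostderr", "-v=1", "--driver=qemu2"); err != nil {
		fmt.Println("upgrade start failed:", err)
	}
}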
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-588000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0916 04:07:57.175539    1652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/functional-926000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:08:27.621607    1652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/addons-490000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-588000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m22.449825667s)

-- stdout --
	* [running-upgrade-588000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19651
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-588000" primary control-plane node in "running-upgrade-588000" cluster
	* Updating the running qemu2 "running-upgrade-588000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0916 04:06:48.360969    4655 out.go:345] Setting OutFile to fd 1 ...
	I0916 04:06:48.361115    4655 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:06:48.361119    4655 out.go:358] Setting ErrFile to fd 2...
	I0916 04:06:48.361121    4655 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:06:48.361226    4655 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1133/.minikube/bin
	I0916 04:06:48.362203    4655 out.go:352] Setting JSON to false
	I0916 04:06:48.378141    4655 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3971,"bootTime":1726480837,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0916 04:06:48.378217    4655 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 04:06:48.382513    4655 out.go:177] * [running-upgrade-588000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0916 04:06:48.389439    4655 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 04:06:48.389472    4655 notify.go:220] Checking for updates...
	I0916 04:06:48.396420    4655 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	I0916 04:06:48.403492    4655 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0916 04:06:48.412427    4655 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 04:06:48.415428    4655 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	I0916 04:06:48.418454    4655 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 04:06:48.421685    4655 config.go:182] Loaded profile config "running-upgrade-588000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0916 04:06:48.425450    4655 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0916 04:06:48.428383    4655 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 04:06:48.432484    4655 out.go:177] * Using the qemu2 driver based on existing profile
	I0916 04:06:48.439386    4655 start.go:297] selected driver: qemu2
	I0916 04:06:48.439391    4655 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-588000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50297 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-588000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0916 04:06:48.439443    4655 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 04:06:48.441655    4655 cni.go:84] Creating CNI manager for ""
	I0916 04:06:48.441683    4655 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 04:06:48.441706    4655 start.go:340] cluster config:
	{Name:running-upgrade-588000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50297 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-588000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0916 04:06:48.441764    4655 iso.go:125] acquiring lock: {Name:mk55e9fb1297ea51932361bbd0234f8c1091a697 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 04:06:48.449500    4655 out.go:177] * Starting "running-upgrade-588000" primary control-plane node in "running-upgrade-588000" cluster
	I0916 04:06:48.453420    4655 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0916 04:06:48.453441    4655 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0916 04:06:48.453451    4655 cache.go:56] Caching tarball of preloaded images
	I0916 04:06:48.453516    4655 preload.go:172] Found /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0916 04:06:48.453522    4655 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0916 04:06:48.453581    4655 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/running-upgrade-588000/config.json ...
	I0916 04:06:48.454062    4655 start.go:360] acquireMachinesLock for running-upgrade-588000: {Name:mk751074fc8f4092f9eb4cf42ed6e63cca97a7fa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 04:06:48.454090    4655 start.go:364] duration metric: took 22µs to acquireMachinesLock for "running-upgrade-588000"
	I0916 04:06:48.454098    4655 start.go:96] Skipping create...Using existing machine configuration
	I0916 04:06:48.454104    4655 fix.go:54] fixHost starting: 
	I0916 04:06:48.454720    4655 fix.go:112] recreateIfNeeded on running-upgrade-588000: state=Running err=<nil>
	W0916 04:06:48.454727    4655 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 04:06:48.458385    4655 out.go:177] * Updating the running qemu2 "running-upgrade-588000" VM ...
	I0916 04:06:48.465326    4655 machine.go:93] provisionDockerMachine start ...
	I0916 04:06:48.465373    4655 main.go:141] libmachine: Using SSH client type: native
	I0916 04:06:48.465476    4655 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10257d190] 0x10257f9d0 <nil>  [] 0s} localhost 50265 <nil> <nil>}
	I0916 04:06:48.465482    4655 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 04:06:48.518118    4655 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-588000
	
	I0916 04:06:48.518136    4655 buildroot.go:166] provisioning hostname "running-upgrade-588000"
	I0916 04:06:48.518190    4655 main.go:141] libmachine: Using SSH client type: native
	I0916 04:06:48.518302    4655 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10257d190] 0x10257f9d0 <nil>  [] 0s} localhost 50265 <nil> <nil>}
	I0916 04:06:48.518308    4655 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-588000 && echo "running-upgrade-588000" | sudo tee /etc/hostname
	I0916 04:06:48.573547    4655 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-588000
	
	I0916 04:06:48.573610    4655 main.go:141] libmachine: Using SSH client type: native
	I0916 04:06:48.573736    4655 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10257d190] 0x10257f9d0 <nil>  [] 0s} localhost 50265 <nil> <nil>}
	I0916 04:06:48.573744    4655 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-588000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-588000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-588000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 04:06:48.627899    4655 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 04:06:48.627911    4655 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19651-1133/.minikube CaCertPath:/Users/jenkins/minikube-integration/19651-1133/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19651-1133/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19651-1133/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19651-1133/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19651-1133/.minikube}
	I0916 04:06:48.627925    4655 buildroot.go:174] setting up certificates
	I0916 04:06:48.627933    4655 provision.go:84] configureAuth start
	I0916 04:06:48.627938    4655 provision.go:143] copyHostCerts
	I0916 04:06:48.628005    4655 exec_runner.go:144] found /Users/jenkins/minikube-integration/19651-1133/.minikube/ca.pem, removing ...
	I0916 04:06:48.628013    4655 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19651-1133/.minikube/ca.pem
	I0916 04:06:48.628144    4655 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19651-1133/.minikube/ca.pem (1078 bytes)
	I0916 04:06:48.628316    4655 exec_runner.go:144] found /Users/jenkins/minikube-integration/19651-1133/.minikube/cert.pem, removing ...
	I0916 04:06:48.628320    4655 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19651-1133/.minikube/cert.pem
	I0916 04:06:48.628365    4655 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19651-1133/.minikube/cert.pem (1123 bytes)
	I0916 04:06:48.628471    4655 exec_runner.go:144] found /Users/jenkins/minikube-integration/19651-1133/.minikube/key.pem, removing ...
	I0916 04:06:48.628473    4655 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19651-1133/.minikube/key.pem
	I0916 04:06:48.628520    4655 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19651-1133/.minikube/key.pem (1675 bytes)
	I0916 04:06:48.628602    4655 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19651-1133/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19651-1133/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-588000 san=[127.0.0.1 localhost minikube running-upgrade-588000]
	I0916 04:06:48.919739    4655 provision.go:177] copyRemoteCerts
	I0916 04:06:48.919790    4655 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 04:06:48.919801    4655 sshutil.go:53] new ssh client: &{IP:localhost Port:50265 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/running-upgrade-588000/id_rsa Username:docker}
	I0916 04:06:48.949651    4655 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 04:06:48.957155    4655 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0916 04:06:48.964089    4655 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0916 04:06:48.972051    4655 provision.go:87] duration metric: took 344.112958ms to configureAuth
	I0916 04:06:48.972062    4655 buildroot.go:189] setting minikube options for container-runtime
	I0916 04:06:48.972202    4655 config.go:182] Loaded profile config "running-upgrade-588000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0916 04:06:48.972255    4655 main.go:141] libmachine: Using SSH client type: native
	I0916 04:06:48.972350    4655 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10257d190] 0x10257f9d0 <nil>  [] 0s} localhost 50265 <nil> <nil>}
	I0916 04:06:48.972355    4655 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0916 04:06:49.026770    4655 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0916 04:06:49.026787    4655 buildroot.go:70] root file system type: tmpfs
	I0916 04:06:49.026836    4655 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0916 04:06:49.026898    4655 main.go:141] libmachine: Using SSH client type: native
	I0916 04:06:49.027012    4655 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10257d190] 0x10257f9d0 <nil>  [] 0s} localhost 50265 <nil> <nil>}
	I0916 04:06:49.027046    4655 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0916 04:06:49.083476    4655 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0916 04:06:49.083543    4655 main.go:141] libmachine: Using SSH client type: native
	I0916 04:06:49.083660    4655 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10257d190] 0x10257f9d0 <nil>  [] 0s} localhost 50265 <nil> <nil>}
	I0916 04:06:49.083668    4655 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0916 04:06:49.137629    4655 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 04:06:49.137642    4655 machine.go:96] duration metric: took 672.323083ms to provisionDockerMachine
	I0916 04:06:49.137648    4655 start.go:293] postStartSetup for "running-upgrade-588000" (driver="qemu2")
	I0916 04:06:49.137654    4655 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 04:06:49.137720    4655 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 04:06:49.137729    4655 sshutil.go:53] new ssh client: &{IP:localhost Port:50265 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/running-upgrade-588000/id_rsa Username:docker}
	I0916 04:06:49.165972    4655 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 04:06:49.167373    4655 info.go:137] Remote host: Buildroot 2021.02.12
	I0916 04:06:49.167381    4655 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19651-1133/.minikube/addons for local assets ...
	I0916 04:06:49.167461    4655 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19651-1133/.minikube/files for local assets ...
	I0916 04:06:49.167567    4655 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19651-1133/.minikube/files/etc/ssl/certs/16522.pem -> 16522.pem in /etc/ssl/certs
	I0916 04:06:49.167666    4655 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 04:06:49.170100    4655 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1133/.minikube/files/etc/ssl/certs/16522.pem --> /etc/ssl/certs/16522.pem (1708 bytes)
	I0916 04:06:49.177239    4655 start.go:296] duration metric: took 39.586875ms for postStartSetup
	I0916 04:06:49.177254    4655 fix.go:56] duration metric: took 723.166458ms for fixHost
	I0916 04:06:49.177295    4655 main.go:141] libmachine: Using SSH client type: native
	I0916 04:06:49.177402    4655 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10257d190] 0x10257f9d0 <nil>  [] 0s} localhost 50265 <nil> <nil>}
	I0916 04:06:49.177407    4655 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0916 04:06:49.227858    4655 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726484809.700906305
	
	I0916 04:06:49.227868    4655 fix.go:216] guest clock: 1726484809.700906305
	I0916 04:06:49.227872    4655 fix.go:229] Guest: 2024-09-16 04:06:49.700906305 -0700 PDT Remote: 2024-09-16 04:06:49.177257 -0700 PDT m=+0.837251501 (delta=523.649305ms)
	I0916 04:06:49.227887    4655 fix.go:200] guest clock delta is within tolerance: 523.649305ms
	I0916 04:06:49.227891    4655 start.go:83] releasing machines lock for "running-upgrade-588000", held for 773.812292ms
	I0916 04:06:49.227965    4655 ssh_runner.go:195] Run: cat /version.json
	I0916 04:06:49.227974    4655 sshutil.go:53] new ssh client: &{IP:localhost Port:50265 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/running-upgrade-588000/id_rsa Username:docker}
	I0916 04:06:49.227976    4655 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 04:06:49.228004    4655 sshutil.go:53] new ssh client: &{IP:localhost Port:50265 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/running-upgrade-588000/id_rsa Username:docker}
	W0916 04:06:49.228550    4655 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50265: connect: connection refused
	I0916 04:06:49.228568    4655 retry.go:31] will retry after 226.87064ms: dial tcp [::1]:50265: connect: connection refused
	W0916 04:06:49.254357    4655 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0916 04:06:49.254410    4655 ssh_runner.go:195] Run: systemctl --version
	I0916 04:06:49.256265    4655 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0916 04:06:49.257892    4655 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 04:06:49.257923    4655 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0916 04:06:49.261411    4655 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0916 04:06:49.265679    4655 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0916 04:06:49.265691    4655 start.go:495] detecting cgroup driver to use...
	I0916 04:06:49.265753    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 04:06:49.271374    4655 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0916 04:06:49.274214    4655 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 04:06:49.277568    4655 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0916 04:06:49.277608    4655 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0916 04:06:49.280943    4655 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 04:06:49.283936    4655 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 04:06:49.286601    4655 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 04:06:49.289864    4655 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 04:06:49.292998    4655 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 04:06:49.296655    4655 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 04:06:49.299840    4655 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 04:06:49.302760    4655 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 04:06:49.307012    4655 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 04:06:49.309957    4655 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 04:06:49.400333    4655 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0916 04:06:49.410988    4655 start.go:495] detecting cgroup driver to use...
	I0916 04:06:49.411072    4655 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0916 04:06:49.417108    4655 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 04:06:49.421977    4655 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 04:06:49.427899    4655 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 04:06:49.432497    4655 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 04:06:49.437069    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 04:06:49.442204    4655 ssh_runner.go:195] Run: which cri-dockerd
	I0916 04:06:49.444011    4655 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0916 04:06:49.446813    4655 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0916 04:06:49.451624    4655 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0916 04:06:49.541323    4655 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0916 04:06:49.648648    4655 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0916 04:06:49.648740    4655 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0916 04:06:49.655633    4655 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 04:06:49.756307    4655 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0916 04:06:51.371613    4655 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.615321125s)
	I0916 04:06:51.371689    4655 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0916 04:06:51.376751    4655 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0916 04:06:51.384253    4655 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0916 04:06:51.388892    4655 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0916 04:06:51.479309    4655 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0916 04:06:51.562757    4655 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 04:06:51.626665    4655 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0916 04:06:51.633422    4655 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0916 04:06:51.638231    4655 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 04:06:51.697218    4655 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0916 04:06:51.735483    4655 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0916 04:06:51.735583    4655 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0916 04:06:51.738681    4655 start.go:563] Will wait 60s for crictl version
	I0916 04:06:51.738748    4655 ssh_runner.go:195] Run: which crictl
	I0916 04:06:51.740014    4655 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 04:06:51.751273    4655 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0916 04:06:51.751355    4655 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0916 04:06:51.763519    4655 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0916 04:06:51.783024    4655 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0916 04:06:51.783158    4655 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0916 04:06:51.784439    4655 kubeadm.go:883] updating cluster {Name:running-upgrade-588000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50297 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-588000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0916 04:06:51.784480    4655 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0916 04:06:51.784527    4655 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0916 04:06:51.794207    4655 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0916 04:06:51.794215    4655 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0916 04:06:51.794270    4655 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0916 04:06:51.797699    4655 ssh_runner.go:195] Run: which lz4
	I0916 04:06:51.799068    4655 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0916 04:06:51.800282    4655 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0916 04:06:51.800291    4655 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0916 04:06:52.725938    4655 docker.go:649] duration metric: took 926.928375ms to copy over tarball
	I0916 04:06:52.726019    4655 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0916 04:06:53.834603    4655 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.108591625s)
	I0916 04:06:53.834617    4655 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0916 04:06:53.850334    4655 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0916 04:06:53.853571    4655 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0916 04:06:53.858681    4655 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 04:06:53.921266    4655 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0916 04:06:54.226381    4655 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0916 04:06:54.240525    4655 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0916 04:06:54.240533    4655 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0916 04:06:54.240538    4655 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0916 04:06:54.244378    4655 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 04:06:54.246986    4655 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0916 04:06:54.249853    4655 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0916 04:06:54.249929    4655 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 04:06:54.252532    4655 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0916 04:06:54.252583    4655 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0916 04:06:54.254413    4655 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0916 04:06:54.254488    4655 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0916 04:06:54.256024    4655 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0916 04:06:54.256091    4655 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0916 04:06:54.257532    4655 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0916 04:06:54.258088    4655 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0916 04:06:54.259093    4655 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0916 04:06:54.259752    4655 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0916 04:06:54.260399    4655 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0916 04:06:54.261817    4655 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0916 04:06:54.627643    4655 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0916 04:06:54.640025    4655 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0916 04:06:54.640055    4655 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0916 04:06:54.640122    4655 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0916 04:06:54.652764    4655 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0916 04:06:54.664153    4655 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0916 04:06:54.674217    4655 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0916 04:06:54.674236    4655 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0916 04:06:54.674295    4655 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0916 04:06:54.684200    4655 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0916 04:06:54.685642    4655 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0916 04:06:54.685935    4655 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0916 04:06:54.688867    4655 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	W0916 04:06:54.710538    4655 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0916 04:06:54.710684    4655 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0916 04:06:54.722018    4655 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0916 04:06:54.722039    4655 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0916 04:06:54.722088    4655 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0916 04:06:54.722103    4655 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0916 04:06:54.722139    4655 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0916 04:06:54.722141    4655 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0916 04:06:54.722104    4655 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0916 04:06:54.722154    4655 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0916 04:06:54.722193    4655 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0916 04:06:54.724106    4655 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0916 04:06:54.724119    4655 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0916 04:06:54.724159    4655 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0916 04:06:54.751414    4655 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0916 04:06:54.751429    4655 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0916 04:06:54.751498    4655 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0916 04:06:54.751548    4655 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0916 04:06:54.752047    4655 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0916 04:06:54.752120    4655 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0916 04:06:54.753159    4655 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0916 04:06:54.753171    4655 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0916 04:06:54.755106    4655 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0916 04:06:54.755116    4655 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0916 04:06:54.758896    4655 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0916 04:06:54.769832    4655 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0916 04:06:54.769845    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0916 04:06:54.786721    4655 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0916 04:06:54.786745    4655 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0916 04:06:54.786815    4655 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0916 04:06:54.841814    4655 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0916 04:06:54.841834    4655 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0916 04:06:54.841840    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0916 04:06:54.841865    4655 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0916 04:06:54.879707    4655 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	W0916 04:06:55.084817    4655 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0916 04:06:55.085078    4655 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 04:06:55.114738    4655 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0916 04:06:55.114772    4655 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 04:06:55.114861    4655 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 04:06:56.562782    4655 ssh_runner.go:235] Completed: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.447918917s)
	I0916 04:06:56.562815    4655 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0916 04:06:56.563245    4655 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0916 04:06:56.568688    4655 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0916 04:06:56.568730    4655 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0916 04:06:56.671040    4655 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0916 04:06:56.671058    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0916 04:06:57.254170    4655 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0916 04:06:57.254202    4655 cache_images.go:92] duration metric: took 3.013713125s to LoadCachedImages
	W0916 04:06:57.254239    4655 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
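The cache-load sequence above follows a fixed three-step pattern per image: a stat existence check inside the guest, an scp of the tarball from the host-side cache when the check fails, and a piped docker load. A minimal shell sketch of the same steps, reusing the pause_3.7 paths from this log (user@guest is a placeholder, not from the log):

	# 1. check whether the tarball already exists in the guest
	stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	# 2. if not, copy it over from the host cache (the scp step above)
	scp .minikube/cache/images/arm64/registry.k8s.io/pause_3.7 user@guest:/var/lib/minikube/images/pause_3.7
	# 3. stream the tarball into the guest's Docker daemon
	sudo cat /var/lib/minikube/images/pause_3.7 | docker load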
	I0916 04:06:57.254246    4655 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0916 04:06:57.254302    4655 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-588000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-588000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 04:06:57.254393    4655 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0916 04:06:57.281411    4655 cni.go:84] Creating CNI manager for ""
	I0916 04:06:57.281426    4655 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 04:06:57.281434    4655 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 04:06:57.281443    4655 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-588000 NodeName:running-upgrade-588000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 04:06:57.281528    4655 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-588000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0916 04:06:57.281594    4655 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0916 04:06:57.285040    4655 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 04:06:57.285079    4655 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 04:06:57.287816    4655 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0916 04:06:57.292513    4655 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 04:06:57.297041    4655 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
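The kubeadm config printed above bundles four ---separated documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) into the single staged file /var/tmp/minikube/kubeadm.yaml.new (2096 bytes per the scp line above); it is only promoted over the live file once drift is confirmed by the diff further down. As a hedged aside, standard kubeadm subcommands (not taken from this log) can print the defaults for comparison or dry-check such a file:

	# print the InitConfiguration/ClusterConfiguration defaults for comparison
	kubeadm config print init-defaults
	# run only the preflight checks against the staged config
	sudo kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml.new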
	I0916 04:06:57.301783    4655 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0916 04:06:57.303198    4655 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 04:06:57.386053    4655 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 04:06:57.391720    4655 certs.go:68] Setting up /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/running-upgrade-588000 for IP: 10.0.2.15
	I0916 04:06:57.391728    4655 certs.go:194] generating shared ca certs ...
	I0916 04:06:57.391736    4655 certs.go:226] acquiring lock for ca certs: {Name:mk7bbdd60870074cef3b6b7f58dae6ae1dc0ffa1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 04:06:57.391880    4655 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19651-1133/.minikube/ca.key
	I0916 04:06:57.391913    4655 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19651-1133/.minikube/proxy-client-ca.key
	I0916 04:06:57.391921    4655 certs.go:256] generating profile certs ...
	I0916 04:06:57.391984    4655 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/running-upgrade-588000/client.key
	I0916 04:06:57.392004    4655 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/running-upgrade-588000/apiserver.key.64cdf8d5
	I0916 04:06:57.392016    4655 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/running-upgrade-588000/apiserver.crt.64cdf8d5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0916 04:06:57.564245    4655 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/running-upgrade-588000/apiserver.crt.64cdf8d5 ...
	I0916 04:06:57.564261    4655 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/running-upgrade-588000/apiserver.crt.64cdf8d5: {Name:mk88b73d2f49b65d24dbc08fe5f0410768790cce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 04:06:57.565796    4655 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/running-upgrade-588000/apiserver.key.64cdf8d5 ...
	I0916 04:06:57.565802    4655 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/running-upgrade-588000/apiserver.key.64cdf8d5: {Name:mk526320466c9a3615e0ee238f83e6e03c47b29f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 04:06:57.565942    4655 certs.go:381] copying /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/running-upgrade-588000/apiserver.crt.64cdf8d5 -> /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/running-upgrade-588000/apiserver.crt
	I0916 04:06:57.566096    4655 certs.go:385] copying /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/running-upgrade-588000/apiserver.key.64cdf8d5 -> /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/running-upgrade-588000/apiserver.key
	I0916 04:06:57.566250    4655 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/running-upgrade-588000/proxy-client.key
	I0916 04:06:57.566400    4655 certs.go:484] found cert: /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/1652.pem (1338 bytes)
	W0916 04:06:57.566422    4655 certs.go:480] ignoring /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/1652_empty.pem, impossibly tiny 0 bytes
	I0916 04:06:57.566427    4655 certs.go:484] found cert: /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 04:06:57.566454    4655 certs.go:484] found cert: /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/ca.pem (1078 bytes)
	I0916 04:06:57.566476    4655 certs.go:484] found cert: /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/cert.pem (1123 bytes)
	I0916 04:06:57.566495    4655 certs.go:484] found cert: /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/key.pem (1675 bytes)
	I0916 04:06:57.566542    4655 certs.go:484] found cert: /Users/jenkins/minikube-integration/19651-1133/.minikube/files/etc/ssl/certs/16522.pem (1708 bytes)
	I0916 04:06:57.566906    4655 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1133/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 04:06:57.574630    4655 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1133/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0916 04:06:57.581552    4655 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1133/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 04:06:57.588284    4655 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1133/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 04:06:57.595050    4655 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/running-upgrade-588000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0916 04:06:57.602629    4655 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/running-upgrade-588000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 04:06:57.621286    4655 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/running-upgrade-588000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 04:06:57.632940    4655 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/running-upgrade-588000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 04:06:57.644314    4655 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1133/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 04:06:57.650629    4655 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/1652.pem --> /usr/share/ca-certificates/1652.pem (1338 bytes)
	I0916 04:06:57.661667    4655 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1133/.minikube/files/etc/ssl/certs/16522.pem --> /usr/share/ca-certificates/16522.pem (1708 bytes)
	I0916 04:06:57.668150    4655 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 04:06:57.672796    4655 ssh_runner.go:195] Run: openssl version
	I0916 04:06:57.674675    4655 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 04:06:57.677819    4655 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 04:06:57.679481    4655 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:20 /usr/share/ca-certificates/minikubeCA.pem
	I0916 04:06:57.679507    4655 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 04:06:57.681447    4655 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 04:06:57.684202    4655 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1652.pem && ln -fs /usr/share/ca-certificates/1652.pem /etc/ssl/certs/1652.pem"
	I0916 04:06:57.687327    4655 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1652.pem
	I0916 04:06:57.688727    4655 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:35 /usr/share/ca-certificates/1652.pem
	I0916 04:06:57.688755    4655 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1652.pem
	I0916 04:06:57.690497    4655 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1652.pem /etc/ssl/certs/51391683.0"
	I0916 04:06:57.693148    4655 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16522.pem && ln -fs /usr/share/ca-certificates/16522.pem /etc/ssl/certs/16522.pem"
	I0916 04:06:57.696252    4655 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16522.pem
	I0916 04:06:57.697900    4655 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:35 /usr/share/ca-certificates/16522.pem
	I0916 04:06:57.697923    4655 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16522.pem
	I0916 04:06:57.699646    4655 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16522.pem /etc/ssl/certs/3ec20f2e.0"
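The test -L / ln -fs pairs above install each PEM under OpenSSL's hashed-name convention: the link names (b5213941.0, 51391683.0, 3ec20f2e.0) are the certificates' subject hashes, which is exactly what the interleaved openssl x509 -hash -noout runs compute. One link reproduced by hand (paths from this log; illustrative):

	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"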
	I0916 04:06:57.702330    4655 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 04:06:57.703905    4655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0916 04:06:57.705687    4655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0916 04:06:57.707556    4655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0916 04:06:57.709289    4655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0916 04:06:57.711425    4655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0916 04:06:57.713132    4655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
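The six openssl runs above are 24-hour expiry checks: -checkend 86400 exits non-zero if the certificate expires within 86400 seconds, so a zero exit means the cert remains valid for at least another day. A standalone equivalent (illustrative):

	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 && echo "valid for >= 24h"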
	I0916 04:06:57.715043    4655 kubeadm.go:392] StartCluster: {Name:running-upgrade-588000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50297 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-588000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0916 04:06:57.715122    4655 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0916 04:06:57.733580    4655 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 04:06:57.737270    4655 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0916 04:06:57.737280    4655 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0916 04:06:57.737314    4655 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0916 04:06:57.740360    4655 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0916 04:06:57.740612    4655 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-588000" does not appear in /Users/jenkins/minikube-integration/19651-1133/kubeconfig
	I0916 04:06:57.740665    4655 kubeconfig.go:62] /Users/jenkins/minikube-integration/19651-1133/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-588000" cluster setting kubeconfig missing "running-upgrade-588000" context setting]
	I0916 04:06:57.740815    4655 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19651-1133/kubeconfig: {Name:mk6c71bad5b7ea3c1178ed072ac13c9e3d1e147d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 04:06:57.741495    4655 kapi.go:59] client config for running-upgrade-588000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/running-upgrade-588000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/running-upgrade-588000/client.key", CAFile:"/Users/jenkins/minikube-integration/19651-1133/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103b55800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 04:06:57.741823    4655 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0916 04:06:57.744746    4655 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-588000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0916 04:06:57.744755    4655 kubeadm.go:1160] stopping kube-system containers ...
	I0916 04:06:57.744810    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0916 04:06:57.757225    4655 docker.go:483] Stopping containers: [cc2fad651b22 c62901b6049b 29a0ea0245af f66ae9c4ecc7 f96872a76692 b09657ec1b89 86a9d3d5cf3b 097738ff3821 11b972b52433 ec701ef5863e 40ddd1363a95 9afebac229c8 de05511d0b22 26c8d8f36670 d92e5ad3e2d6 44aacfafc7e2 591d0be7d493]
	I0916 04:06:57.757308    4655 ssh_runner.go:195] Run: docker stop cc2fad651b22 c62901b6049b 29a0ea0245af f66ae9c4ecc7 f96872a76692 b09657ec1b89 86a9d3d5cf3b 097738ff3821 11b972b52433 ec701ef5863e 40ddd1363a95 9afebac229c8 de05511d0b22 26c8d8f36670 d92e5ad3e2d6 44aacfafc7e2 591d0be7d493
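Stopping the kube-system containers above is a two-step pipeline: docker ps -a with the name=k8s_.*_(kube-system)_ filter collects the container IDs, and a single docker stop receives the whole list. Joined into one illustrative command (not from the log):

	docker ps -a --filter 'name=k8s_.*_(kube-system)_' --format '{{.ID}}' | xargs docker stop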
	I0916 04:06:58.252624    4655 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0916 04:06:58.336125    4655 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 04:06:58.342943    4655 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5639 Sep 16 11:06 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Sep 16 11:06 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Sep 16 11:06 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Sep 16 11:06 /etc/kubernetes/scheduler.conf
	
	I0916 04:06:58.342984    4655 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50297 /etc/kubernetes/admin.conf
	I0916 04:06:58.353160    4655 kubeadm.go:163] "https://control-plane.minikube.internal:50297" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50297 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0916 04:06:58.353196    4655 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 04:06:58.356038    4655 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50297 /etc/kubernetes/kubelet.conf
	I0916 04:06:58.358951    4655 kubeadm.go:163] "https://control-plane.minikube.internal:50297" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50297 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0916 04:06:58.358983    4655 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 04:06:58.362059    4655 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50297 /etc/kubernetes/controller-manager.conf
	I0916 04:06:58.364981    4655 kubeadm.go:163] "https://control-plane.minikube.internal:50297" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50297 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0916 04:06:58.365010    4655 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 04:06:58.368153    4655 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50297 /etc/kubernetes/scheduler.conf
	I0916 04:06:58.370876    4655 kubeadm.go:163] "https://control-plane.minikube.internal:50297" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50297 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0916 04:06:58.370916    4655 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 04:06:58.373550    4655 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 04:06:58.379924    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0916 04:06:58.403116    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0916 04:06:58.921633    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0916 04:06:59.100001    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0916 04:06:59.130233    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0916 04:06:59.151993    4655 api_server.go:52] waiting for apiserver process to appear ...
	I0916 04:06:59.152078    4655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 04:06:59.654174    4655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 04:07:00.154162    4655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 04:07:00.159040    4655 api_server.go:72] duration metric: took 1.007069s to wait for apiserver process to appear ...
	I0916 04:07:00.159051    4655 api_server.go:88] waiting for apiserver healthz status ...
	I0916 04:07:00.159061    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:07:05.161173    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:07:05.161288    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:07:10.162108    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:07:10.162202    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:07:15.163333    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:07:15.163433    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:07:20.164913    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:07:20.165003    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:07:25.167612    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:07:25.167719    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:07:30.170327    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:07:30.170418    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:07:35.172468    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:07:35.172566    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:07:40.175285    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:07:40.175371    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:07:45.176650    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:07:45.176747    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:07:50.179413    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:07:50.179504    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:07:55.182145    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:07:55.182199    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:08:00.184550    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
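From this point the run settles into a retry loop: each probe of https://10.0.2.15:8443/healthz times out after roughly five seconds, and every failed round triggers a fresh pass of log gathering over the control-plane containers. A manual probe of the same endpoint would look like the following (illustrative; -k skips verification against the minikube CA):

	curl -k https://10.0.2.15:8443/healthz
	# a healthy apiserver responds with the plain body "ok"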
	I0916 04:08:00.184673    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:08:00.195839    4655 logs.go:276] 2 containers: [6e10ade08bbc f96872a76692]
	I0916 04:08:00.195930    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:08:00.206385    4655 logs.go:276] 2 containers: [05bfeea67744 097738ff3821]
	I0916 04:08:00.206475    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:08:00.221685    4655 logs.go:276] 1 containers: [e11e4df1f883]
	I0916 04:08:00.221992    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:08:00.236467    4655 logs.go:276] 2 containers: [235c08ec6496 cc2fad651b22]
	I0916 04:08:00.236563    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:08:00.246600    4655 logs.go:276] 1 containers: [1ae068289404]
	I0916 04:08:00.246680    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:08:00.257011    4655 logs.go:276] 2 containers: [309ff69f986d 11b972b52433]
	I0916 04:08:00.257087    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:08:00.270805    4655 logs.go:276] 0 containers: []
	W0916 04:08:00.270819    4655 logs.go:278] No container was found matching "kindnet"
	I0916 04:08:00.270881    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:08:00.281301    4655 logs.go:276] 2 containers: [cd2986a75a6f 4e73ffbfc80e]
	I0916 04:08:00.281320    4655 logs.go:123] Gathering logs for etcd [097738ff3821] ...
	I0916 04:08:00.281326    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 097738ff3821"
	I0916 04:08:00.294970    4655 logs.go:123] Gathering logs for kube-scheduler [cc2fad651b22] ...
	I0916 04:08:00.294981    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc2fad651b22"
	I0916 04:08:00.308628    4655 logs.go:123] Gathering logs for container status ...
	I0916 04:08:00.308638    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:08:00.320767    4655 logs.go:123] Gathering logs for kube-apiserver [6e10ade08bbc] ...
	I0916 04:08:00.320780    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e10ade08bbc"
	I0916 04:08:00.334819    4655 logs.go:123] Gathering logs for kube-apiserver [f96872a76692] ...
	I0916 04:08:00.334834    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f96872a76692"
	I0916 04:08:00.347446    4655 logs.go:123] Gathering logs for coredns [e11e4df1f883] ...
	I0916 04:08:00.347459    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e11e4df1f883"
	I0916 04:08:00.360810    4655 logs.go:123] Gathering logs for Docker ...
	I0916 04:08:00.360823    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:08:00.385908    4655 logs.go:123] Gathering logs for kubelet ...
	I0916 04:08:00.385916    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:08:00.420411    4655 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:08:00.420417    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:08:00.490081    4655 logs.go:123] Gathering logs for etcd [05bfeea67744] ...
	I0916 04:08:00.490091    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bfeea67744"
	I0916 04:08:00.504120    4655 logs.go:123] Gathering logs for kube-scheduler [235c08ec6496] ...
	I0916 04:08:00.504129    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 235c08ec6496"
	I0916 04:08:00.515846    4655 logs.go:123] Gathering logs for kube-proxy [1ae068289404] ...
	I0916 04:08:00.515856    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ae068289404"
	I0916 04:08:00.527697    4655 logs.go:123] Gathering logs for storage-provisioner [cd2986a75a6f] ...
	I0916 04:08:00.527707    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd2986a75a6f"
	I0916 04:08:00.542297    4655 logs.go:123] Gathering logs for dmesg ...
	I0916 04:08:00.542307    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:08:00.546922    4655 logs.go:123] Gathering logs for kube-controller-manager [309ff69f986d] ...
	I0916 04:08:00.546928    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 309ff69f986d"
	I0916 04:08:00.567831    4655 logs.go:123] Gathering logs for kube-controller-manager [11b972b52433] ...
	I0916 04:08:00.567841    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b972b52433"
	I0916 04:08:00.581004    4655 logs.go:123] Gathering logs for storage-provisioner [4e73ffbfc80e] ...
	I0916 04:08:00.581015    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e73ffbfc80e"
	I0916 04:08:03.098790    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:08:08.101713    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:08:08.102355    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:08:08.143037    4655 logs.go:276] 2 containers: [6e10ade08bbc f96872a76692]
	I0916 04:08:08.143193    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:08:08.164951    4655 logs.go:276] 2 containers: [05bfeea67744 097738ff3821]
	I0916 04:08:08.165093    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:08:08.181014    4655 logs.go:276] 1 containers: [e11e4df1f883]
	I0916 04:08:08.181118    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:08:08.193497    4655 logs.go:276] 2 containers: [235c08ec6496 cc2fad651b22]
	I0916 04:08:08.193575    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:08:08.204389    4655 logs.go:276] 1 containers: [1ae068289404]
	I0916 04:08:08.204474    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:08:08.221154    4655 logs.go:276] 2 containers: [309ff69f986d 11b972b52433]
	I0916 04:08:08.221241    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:08:08.231567    4655 logs.go:276] 0 containers: []
	W0916 04:08:08.231579    4655 logs.go:278] No container was found matching "kindnet"
	I0916 04:08:08.231649    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:08:08.242017    4655 logs.go:276] 2 containers: [cd2986a75a6f 4e73ffbfc80e]
	I0916 04:08:08.242034    4655 logs.go:123] Gathering logs for etcd [05bfeea67744] ...
	I0916 04:08:08.242039    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bfeea67744"
	I0916 04:08:08.255909    4655 logs.go:123] Gathering logs for kube-scheduler [cc2fad651b22] ...
	I0916 04:08:08.255921    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc2fad651b22"
	I0916 04:08:08.269744    4655 logs.go:123] Gathering logs for kube-controller-manager [309ff69f986d] ...
	I0916 04:08:08.269756    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 309ff69f986d"
	I0916 04:08:08.287419    4655 logs.go:123] Gathering logs for storage-provisioner [4e73ffbfc80e] ...
	I0916 04:08:08.287433    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e73ffbfc80e"
	I0916 04:08:08.299712    4655 logs.go:123] Gathering logs for container status ...
	I0916 04:08:08.299724    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:08:08.311217    4655 logs.go:123] Gathering logs for dmesg ...
	I0916 04:08:08.311228    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:08:08.315461    4655 logs.go:123] Gathering logs for kube-apiserver [6e10ade08bbc] ...
	I0916 04:08:08.315468    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e10ade08bbc"
	I0916 04:08:08.335931    4655 logs.go:123] Gathering logs for etcd [097738ff3821] ...
	I0916 04:08:08.335941    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 097738ff3821"
	I0916 04:08:08.349241    4655 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:08:08.349252    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:08:08.383868    4655 logs.go:123] Gathering logs for kube-scheduler [235c08ec6496] ...
	I0916 04:08:08.383879    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 235c08ec6496"
	I0916 04:08:08.395237    4655 logs.go:123] Gathering logs for storage-provisioner [cd2986a75a6f] ...
	I0916 04:08:08.395248    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd2986a75a6f"
	I0916 04:08:08.407554    4655 logs.go:123] Gathering logs for kubelet ...
	I0916 04:08:08.407564    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:08:08.441977    4655 logs.go:123] Gathering logs for coredns [e11e4df1f883] ...
	I0916 04:08:08.441985    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e11e4df1f883"
	I0916 04:08:08.452964    4655 logs.go:123] Gathering logs for kube-proxy [1ae068289404] ...
	I0916 04:08:08.452977    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ae068289404"
	I0916 04:08:08.464147    4655 logs.go:123] Gathering logs for kube-controller-manager [11b972b52433] ...
	I0916 04:08:08.464159    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b972b52433"
	I0916 04:08:08.475461    4655 logs.go:123] Gathering logs for Docker ...
	I0916 04:08:08.475472    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:08:08.500479    4655 logs.go:123] Gathering logs for kube-apiserver [f96872a76692] ...
	I0916 04:08:08.500490    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f96872a76692"
	I0916 04:08:11.017838    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:08:16.020293    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:08:16.020830    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:08:16.059896    4655 logs.go:276] 2 containers: [6e10ade08bbc f96872a76692]
	I0916 04:08:16.060062    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:08:16.081491    4655 logs.go:276] 2 containers: [05bfeea67744 097738ff3821]
	I0916 04:08:16.081607    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:08:16.097099    4655 logs.go:276] 1 containers: [e11e4df1f883]
	I0916 04:08:16.097191    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:08:16.109239    4655 logs.go:276] 2 containers: [235c08ec6496 cc2fad651b22]
	I0916 04:08:16.109326    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:08:16.120899    4655 logs.go:276] 1 containers: [1ae068289404]
	I0916 04:08:16.120981    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:08:16.132392    4655 logs.go:276] 2 containers: [309ff69f986d 11b972b52433]
	I0916 04:08:16.132466    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:08:16.145473    4655 logs.go:276] 0 containers: []
	W0916 04:08:16.145484    4655 logs.go:278] No container was found matching "kindnet"
	I0916 04:08:16.145552    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:08:16.155906    4655 logs.go:276] 2 containers: [cd2986a75a6f 4e73ffbfc80e]
	I0916 04:08:16.155923    4655 logs.go:123] Gathering logs for kube-scheduler [235c08ec6496] ...
	I0916 04:08:16.155930    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 235c08ec6496"
	I0916 04:08:16.167213    4655 logs.go:123] Gathering logs for kube-scheduler [cc2fad651b22] ...
	I0916 04:08:16.167233    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc2fad651b22"
	I0916 04:08:16.178513    4655 logs.go:123] Gathering logs for kube-proxy [1ae068289404] ...
	I0916 04:08:16.178525    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ae068289404"
	I0916 04:08:16.190165    4655 logs.go:123] Gathering logs for kube-controller-manager [309ff69f986d] ...
	I0916 04:08:16.190176    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 309ff69f986d"
	I0916 04:08:16.207778    4655 logs.go:123] Gathering logs for Docker ...
	I0916 04:08:16.207789    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:08:16.234376    4655 logs.go:123] Gathering logs for kube-apiserver [6e10ade08bbc] ...
	I0916 04:08:16.234385    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e10ade08bbc"
	I0916 04:08:16.247989    4655 logs.go:123] Gathering logs for etcd [097738ff3821] ...
	I0916 04:08:16.248002    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 097738ff3821"
	I0916 04:08:16.260798    4655 logs.go:123] Gathering logs for coredns [e11e4df1f883] ...
	I0916 04:08:16.260807    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e11e4df1f883"
	I0916 04:08:16.272555    4655 logs.go:123] Gathering logs for storage-provisioner [cd2986a75a6f] ...
	I0916 04:08:16.272566    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd2986a75a6f"
	I0916 04:08:16.290348    4655 logs.go:123] Gathering logs for dmesg ...
	I0916 04:08:16.290361    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:08:16.294706    4655 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:08:16.294715    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:08:16.330032    4655 logs.go:123] Gathering logs for etcd [05bfeea67744] ...
	I0916 04:08:16.330042    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bfeea67744"
	I0916 04:08:16.346547    4655 logs.go:123] Gathering logs for storage-provisioner [4e73ffbfc80e] ...
	I0916 04:08:16.346556    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e73ffbfc80e"
	I0916 04:08:16.357915    4655 logs.go:123] Gathering logs for container status ...
	I0916 04:08:16.357927    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:08:16.370158    4655 logs.go:123] Gathering logs for kubelet ...
	I0916 04:08:16.370168    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:08:16.406748    4655 logs.go:123] Gathering logs for kube-apiserver [f96872a76692] ...
	I0916 04:08:16.406757    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f96872a76692"
	I0916 04:08:16.418129    4655 logs.go:123] Gathering logs for kube-controller-manager [11b972b52433] ...
	I0916 04:08:16.418142    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b972b52433"
	I0916 04:08:18.931618    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:08:23.934456    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:08:23.935113    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:08:23.976121    4655 logs.go:276] 2 containers: [6e10ade08bbc f96872a76692]
	I0916 04:08:23.976277    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:08:23.998164    4655 logs.go:276] 2 containers: [05bfeea67744 097738ff3821]
	I0916 04:08:23.998283    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:08:24.013309    4655 logs.go:276] 1 containers: [e11e4df1f883]
	I0916 04:08:24.013391    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:08:24.025315    4655 logs.go:276] 2 containers: [235c08ec6496 cc2fad651b22]
	I0916 04:08:24.025402    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:08:24.035909    4655 logs.go:276] 1 containers: [1ae068289404]
	I0916 04:08:24.035981    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:08:24.046587    4655 logs.go:276] 2 containers: [309ff69f986d 11b972b52433]
	I0916 04:08:24.046654    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:08:24.056935    4655 logs.go:276] 0 containers: []
	W0916 04:08:24.056946    4655 logs.go:278] No container was found matching "kindnet"
	I0916 04:08:24.057022    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:08:24.067823    4655 logs.go:276] 2 containers: [cd2986a75a6f 4e73ffbfc80e]
	I0916 04:08:24.067839    4655 logs.go:123] Gathering logs for kubelet ...
	I0916 04:08:24.067844    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:08:24.103701    4655 logs.go:123] Gathering logs for kube-apiserver [f96872a76692] ...
	I0916 04:08:24.103708    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f96872a76692"
	I0916 04:08:24.115606    4655 logs.go:123] Gathering logs for kube-controller-manager [11b972b52433] ...
	I0916 04:08:24.115616    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b972b52433"
	I0916 04:08:24.126972    4655 logs.go:123] Gathering logs for storage-provisioner [4e73ffbfc80e] ...
	I0916 04:08:24.126986    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e73ffbfc80e"
	I0916 04:08:24.137959    4655 logs.go:123] Gathering logs for dmesg ...
	I0916 04:08:24.137968    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:08:24.142742    4655 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:08:24.142747    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:08:24.180786    4655 logs.go:123] Gathering logs for coredns [e11e4df1f883] ...
	I0916 04:08:24.180799    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e11e4df1f883"
	I0916 04:08:24.195055    4655 logs.go:123] Gathering logs for kube-proxy [1ae068289404] ...
	I0916 04:08:24.195068    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ae068289404"
	I0916 04:08:24.228092    4655 logs.go:123] Gathering logs for storage-provisioner [cd2986a75a6f] ...
	I0916 04:08:24.228103    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd2986a75a6f"
	I0916 04:08:24.247883    4655 logs.go:123] Gathering logs for Docker ...
	I0916 04:08:24.247894    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:08:24.273183    4655 logs.go:123] Gathering logs for container status ...
	I0916 04:08:24.273190    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:08:24.285099    4655 logs.go:123] Gathering logs for kube-apiserver [6e10ade08bbc] ...
	I0916 04:08:24.285111    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e10ade08bbc"
	I0916 04:08:24.302346    4655 logs.go:123] Gathering logs for etcd [05bfeea67744] ...
	I0916 04:08:24.302356    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bfeea67744"
	I0916 04:08:24.315964    4655 logs.go:123] Gathering logs for etcd [097738ff3821] ...
	I0916 04:08:24.315977    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 097738ff3821"
	I0916 04:08:24.329237    4655 logs.go:123] Gathering logs for kube-scheduler [235c08ec6496] ...
	I0916 04:08:24.329250    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 235c08ec6496"
	I0916 04:08:24.341366    4655 logs.go:123] Gathering logs for kube-scheduler [cc2fad651b22] ...
	I0916 04:08:24.341377    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc2fad651b22"
	I0916 04:08:24.352663    4655 logs.go:123] Gathering logs for kube-controller-manager [309ff69f986d] ...
	I0916 04:08:24.352675    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 309ff69f986d"
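
The block above is one full iteration of the wait loop that repeats for the rest of this log: an HTTPS GET against the apiserver's /healthz endpoint times out after roughly five seconds (note the ~5 s gap between each "Checking apiserver healthz" line and its "stopped: ... Client.Timeout exceeded" line), after which the control-plane container logs are collected before the next attempt. A rough command-line equivalent of the health probe, for reproducing it by hand; the endpoint and timeout come from the log lines, while the -k flag is an assumption (the real client presumably trusts the cluster CA rather than skipping verification):

    # Probe the apiserver health endpoint with a 5-second client timeout.
    # -k skips TLS verification (assumed here; minikube's own client is
    # expected to use the cluster CA instead).
    curl -k --max-time 5 https://10.0.2.15:8443/healthz
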
	I0916 04:08:26.881559    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:08:31.884260    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:08:31.884850    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:08:31.925664    4655 logs.go:276] 2 containers: [6e10ade08bbc f96872a76692]
	I0916 04:08:31.925848    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:08:31.947344    4655 logs.go:276] 2 containers: [05bfeea67744 097738ff3821]
	I0916 04:08:31.947461    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:08:31.963394    4655 logs.go:276] 1 containers: [e11e4df1f883]
	I0916 04:08:31.963499    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:08:31.975846    4655 logs.go:276] 2 containers: [235c08ec6496 cc2fad651b22]
	I0916 04:08:31.975932    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:08:31.988453    4655 logs.go:276] 1 containers: [1ae068289404]
	I0916 04:08:31.988525    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:08:31.998700    4655 logs.go:276] 2 containers: [309ff69f986d 11b972b52433]
	I0916 04:08:31.998783    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:08:32.008460    4655 logs.go:276] 0 containers: []
	W0916 04:08:32.008471    4655 logs.go:278] No container was found matching "kindnet"
	I0916 04:08:32.008538    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:08:32.024936    4655 logs.go:276] 2 containers: [cd2986a75a6f 4e73ffbfc80e]
	I0916 04:08:32.024952    4655 logs.go:123] Gathering logs for storage-provisioner [cd2986a75a6f] ...
	I0916 04:08:32.024958    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd2986a75a6f"
	I0916 04:08:32.036902    4655 logs.go:123] Gathering logs for Docker ...
	I0916 04:08:32.036913    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:08:32.063440    4655 logs.go:123] Gathering logs for kube-apiserver [6e10ade08bbc] ...
	I0916 04:08:32.063450    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e10ade08bbc"
	I0916 04:08:32.077361    4655 logs.go:123] Gathering logs for kube-apiserver [f96872a76692] ...
	I0916 04:08:32.077372    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f96872a76692"
	I0916 04:08:32.089754    4655 logs.go:123] Gathering logs for etcd [097738ff3821] ...
	I0916 04:08:32.089764    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 097738ff3821"
	I0916 04:08:32.102576    4655 logs.go:123] Gathering logs for coredns [e11e4df1f883] ...
	I0916 04:08:32.102586    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e11e4df1f883"
	I0916 04:08:32.114363    4655 logs.go:123] Gathering logs for kube-scheduler [cc2fad651b22] ...
	I0916 04:08:32.114375    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc2fad651b22"
	I0916 04:08:32.125218    4655 logs.go:123] Gathering logs for kube-controller-manager [309ff69f986d] ...
	I0916 04:08:32.125229    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 309ff69f986d"
	I0916 04:08:32.142189    4655 logs.go:123] Gathering logs for dmesg ...
	I0916 04:08:32.142201    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:08:32.146680    4655 logs.go:123] Gathering logs for kube-proxy [1ae068289404] ...
	I0916 04:08:32.146686    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ae068289404"
	I0916 04:08:32.158005    4655 logs.go:123] Gathering logs for storage-provisioner [4e73ffbfc80e] ...
	I0916 04:08:32.158015    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e73ffbfc80e"
	I0916 04:08:32.169476    4655 logs.go:123] Gathering logs for container status ...
	I0916 04:08:32.169488    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:08:32.181380    4655 logs.go:123] Gathering logs for kube-controller-manager [11b972b52433] ...
	I0916 04:08:32.181393    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b972b52433"
	I0916 04:08:32.193617    4655 logs.go:123] Gathering logs for kubelet ...
	I0916 04:08:32.193627    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:08:32.231697    4655 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:08:32.231708    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:08:32.269133    4655 logs.go:123] Gathering logs for etcd [05bfeea67744] ...
	I0916 04:08:32.269149    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bfeea67744"
	I0916 04:08:32.283698    4655 logs.go:123] Gathering logs for kube-scheduler [235c08ec6496] ...
	I0916 04:08:32.283708    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 235c08ec6496"
	I0916 04:08:34.797234    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:08:39.798149    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:08:39.798745    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:08:39.839318    4655 logs.go:276] 2 containers: [6e10ade08bbc f96872a76692]
	I0916 04:08:39.839487    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:08:39.861529    4655 logs.go:276] 2 containers: [05bfeea67744 097738ff3821]
	I0916 04:08:39.861665    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:08:39.877571    4655 logs.go:276] 1 containers: [e11e4df1f883]
	I0916 04:08:39.877658    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:08:39.890276    4655 logs.go:276] 2 containers: [235c08ec6496 cc2fad651b22]
	I0916 04:08:39.890359    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:08:39.901012    4655 logs.go:276] 1 containers: [1ae068289404]
	I0916 04:08:39.901091    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:08:39.911444    4655 logs.go:276] 2 containers: [309ff69f986d 11b972b52433]
	I0916 04:08:39.911529    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:08:39.921182    4655 logs.go:276] 0 containers: []
	W0916 04:08:39.921193    4655 logs.go:278] No container was found matching "kindnet"
	I0916 04:08:39.921253    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:08:39.931699    4655 logs.go:276] 2 containers: [cd2986a75a6f 4e73ffbfc80e]
	I0916 04:08:39.931720    4655 logs.go:123] Gathering logs for kubelet ...
	I0916 04:08:39.931725    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:08:39.966012    4655 logs.go:123] Gathering logs for kube-scheduler [235c08ec6496] ...
	I0916 04:08:39.966019    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 235c08ec6496"
	I0916 04:08:39.981521    4655 logs.go:123] Gathering logs for Docker ...
	I0916 04:08:39.981532    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:08:40.007575    4655 logs.go:123] Gathering logs for dmesg ...
	I0916 04:08:40.007582    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:08:40.011762    4655 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:08:40.011768    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:08:40.049478    4655 logs.go:123] Gathering logs for etcd [05bfeea67744] ...
	I0916 04:08:40.049494    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bfeea67744"
	I0916 04:08:40.063094    4655 logs.go:123] Gathering logs for kube-controller-manager [309ff69f986d] ...
	I0916 04:08:40.063106    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 309ff69f986d"
	I0916 04:08:40.080469    4655 logs.go:123] Gathering logs for kube-controller-manager [11b972b52433] ...
	I0916 04:08:40.080482    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b972b52433"
	I0916 04:08:40.091294    4655 logs.go:123] Gathering logs for storage-provisioner [4e73ffbfc80e] ...
	I0916 04:08:40.091304    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e73ffbfc80e"
	I0916 04:08:40.102502    4655 logs.go:123] Gathering logs for kube-apiserver [6e10ade08bbc] ...
	I0916 04:08:40.102511    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e10ade08bbc"
	I0916 04:08:40.116191    4655 logs.go:123] Gathering logs for etcd [097738ff3821] ...
	I0916 04:08:40.116199    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 097738ff3821"
	I0916 04:08:40.129257    4655 logs.go:123] Gathering logs for kube-apiserver [f96872a76692] ...
	I0916 04:08:40.129267    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f96872a76692"
	I0916 04:08:40.141219    4655 logs.go:123] Gathering logs for coredns [e11e4df1f883] ...
	I0916 04:08:40.141230    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e11e4df1f883"
	I0916 04:08:40.152678    4655 logs.go:123] Gathering logs for kube-scheduler [cc2fad651b22] ...
	I0916 04:08:40.152689    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc2fad651b22"
	I0916 04:08:40.163295    4655 logs.go:123] Gathering logs for kube-proxy [1ae068289404] ...
	I0916 04:08:40.163307    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ae068289404"
	I0916 04:08:40.174592    4655 logs.go:123] Gathering logs for storage-provisioner [cd2986a75a6f] ...
	I0916 04:08:40.174602    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd2986a75a6f"
	I0916 04:08:40.185951    4655 logs.go:123] Gathering logs for container status ...
	I0916 04:08:40.185964    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:08:42.700358    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:08:47.703073    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:08:47.703706    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:08:47.747277    4655 logs.go:276] 2 containers: [6e10ade08bbc f96872a76692]
	I0916 04:08:47.747445    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:08:47.767340    4655 logs.go:276] 2 containers: [05bfeea67744 097738ff3821]
	I0916 04:08:47.767447    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:08:47.783893    4655 logs.go:276] 1 containers: [e11e4df1f883]
	I0916 04:08:47.783967    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:08:47.795491    4655 logs.go:276] 2 containers: [235c08ec6496 cc2fad651b22]
	I0916 04:08:47.795573    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:08:47.805664    4655 logs.go:276] 1 containers: [1ae068289404]
	I0916 04:08:47.805753    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:08:47.816754    4655 logs.go:276] 2 containers: [309ff69f986d 11b972b52433]
	I0916 04:08:47.816839    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:08:47.828114    4655 logs.go:276] 0 containers: []
	W0916 04:08:47.828126    4655 logs.go:278] No container was found matching "kindnet"
	I0916 04:08:47.828192    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:08:47.838742    4655 logs.go:276] 2 containers: [cd2986a75a6f 4e73ffbfc80e]
	I0916 04:08:47.838760    4655 logs.go:123] Gathering logs for etcd [097738ff3821] ...
	I0916 04:08:47.838766    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 097738ff3821"
	I0916 04:08:47.855853    4655 logs.go:123] Gathering logs for kube-scheduler [cc2fad651b22] ...
	I0916 04:08:47.855863    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc2fad651b22"
	I0916 04:08:47.867279    4655 logs.go:123] Gathering logs for kube-proxy [1ae068289404] ...
	I0916 04:08:47.867290    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ae068289404"
	I0916 04:08:47.879811    4655 logs.go:123] Gathering logs for dmesg ...
	I0916 04:08:47.879821    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:08:47.883874    4655 logs.go:123] Gathering logs for kube-apiserver [6e10ade08bbc] ...
	I0916 04:08:47.883882    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e10ade08bbc"
	I0916 04:08:47.899605    4655 logs.go:123] Gathering logs for etcd [05bfeea67744] ...
	I0916 04:08:47.899616    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bfeea67744"
	I0916 04:08:47.913004    4655 logs.go:123] Gathering logs for coredns [e11e4df1f883] ...
	I0916 04:08:47.913013    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e11e4df1f883"
	I0916 04:08:47.923673    4655 logs.go:123] Gathering logs for kube-controller-manager [309ff69f986d] ...
	I0916 04:08:47.923684    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 309ff69f986d"
	I0916 04:08:47.945527    4655 logs.go:123] Gathering logs for kube-controller-manager [11b972b52433] ...
	I0916 04:08:47.945536    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b972b52433"
	I0916 04:08:47.956793    4655 logs.go:123] Gathering logs for kube-apiserver [f96872a76692] ...
	I0916 04:08:47.956802    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f96872a76692"
	I0916 04:08:47.968792    4655 logs.go:123] Gathering logs for kube-scheduler [235c08ec6496] ...
	I0916 04:08:47.968803    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 235c08ec6496"
	I0916 04:08:47.984195    4655 logs.go:123] Gathering logs for storage-provisioner [cd2986a75a6f] ...
	I0916 04:08:47.984206    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd2986a75a6f"
	I0916 04:08:47.995730    4655 logs.go:123] Gathering logs for storage-provisioner [4e73ffbfc80e] ...
	I0916 04:08:47.995739    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e73ffbfc80e"
	I0916 04:08:48.006819    4655 logs.go:123] Gathering logs for Docker ...
	I0916 04:08:48.006829    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:08:48.032652    4655 logs.go:123] Gathering logs for kubelet ...
	I0916 04:08:48.032660    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:08:48.068353    4655 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:08:48.068361    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:08:48.102171    4655 logs.go:123] Gathering logs for container status ...
	I0916 04:08:48.102183    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
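
The "container status" one-liner above is a shell fallback: the `which crictl || echo crictl` substitution keeps the literal command name even when crictl is not on PATH, and the trailing `|| sudo docker ps -a` takes over whenever the crictl invocation fails for any reason. Expanded into an if/else for readability (a sketch that is equivalent under the assumption that only the exit status of the first command matters):

    # List all containers via crictl when available, otherwise via Docker.
    if ! sudo crictl ps -a; then
        sudo docker ps -a   # fallback when crictl is missing or errors out
    fi
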
	I0916 04:08:50.615802    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:08:55.618411    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:08:55.619002    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:08:55.659401    4655 logs.go:276] 2 containers: [6e10ade08bbc f96872a76692]
	I0916 04:08:55.659588    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:08:55.682035    4655 logs.go:276] 2 containers: [05bfeea67744 097738ff3821]
	I0916 04:08:55.682169    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:08:55.697442    4655 logs.go:276] 1 containers: [e11e4df1f883]
	I0916 04:08:55.697529    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:08:55.709825    4655 logs.go:276] 2 containers: [235c08ec6496 cc2fad651b22]
	I0916 04:08:55.709905    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:08:55.721408    4655 logs.go:276] 1 containers: [1ae068289404]
	I0916 04:08:55.721486    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:08:55.732095    4655 logs.go:276] 2 containers: [309ff69f986d 11b972b52433]
	I0916 04:08:55.732161    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:08:55.743978    4655 logs.go:276] 0 containers: []
	W0916 04:08:55.743991    4655 logs.go:278] No container was found matching "kindnet"
	I0916 04:08:55.744061    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:08:55.754908    4655 logs.go:276] 2 containers: [cd2986a75a6f 4e73ffbfc80e]
	I0916 04:08:55.754925    4655 logs.go:123] Gathering logs for Docker ...
	I0916 04:08:55.754931    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:08:55.780755    4655 logs.go:123] Gathering logs for etcd [05bfeea67744] ...
	I0916 04:08:55.780764    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bfeea67744"
	I0916 04:08:55.794771    4655 logs.go:123] Gathering logs for kube-scheduler [cc2fad651b22] ...
	I0916 04:08:55.794780    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc2fad651b22"
	I0916 04:08:55.806271    4655 logs.go:123] Gathering logs for kube-controller-manager [11b972b52433] ...
	I0916 04:08:55.806283    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b972b52433"
	I0916 04:08:55.817732    4655 logs.go:123] Gathering logs for storage-provisioner [4e73ffbfc80e] ...
	I0916 04:08:55.817743    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e73ffbfc80e"
	I0916 04:08:55.829073    4655 logs.go:123] Gathering logs for container status ...
	I0916 04:08:55.829083    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:08:55.847065    4655 logs.go:123] Gathering logs for kube-apiserver [6e10ade08bbc] ...
	I0916 04:08:55.847077    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e10ade08bbc"
	I0916 04:08:55.861330    4655 logs.go:123] Gathering logs for kube-apiserver [f96872a76692] ...
	I0916 04:08:55.861341    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f96872a76692"
	I0916 04:08:55.873500    4655 logs.go:123] Gathering logs for kube-scheduler [235c08ec6496] ...
	I0916 04:08:55.873510    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 235c08ec6496"
	I0916 04:08:55.888651    4655 logs.go:123] Gathering logs for kube-proxy [1ae068289404] ...
	I0916 04:08:55.888662    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ae068289404"
	I0916 04:08:55.900123    4655 logs.go:123] Gathering logs for kube-controller-manager [309ff69f986d] ...
	I0916 04:08:55.900133    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 309ff69f986d"
	I0916 04:08:55.918791    4655 logs.go:123] Gathering logs for storage-provisioner [cd2986a75a6f] ...
	I0916 04:08:55.918801    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd2986a75a6f"
	I0916 04:08:55.931592    4655 logs.go:123] Gathering logs for kubelet ...
	I0916 04:08:55.931602    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:08:55.965850    4655 logs.go:123] Gathering logs for dmesg ...
	I0916 04:08:55.965857    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:08:55.970432    4655 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:08:55.970440    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:08:56.004590    4655 logs.go:123] Gathering logs for etcd [097738ff3821] ...
	I0916 04:08:56.004603    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 097738ff3821"
	I0916 04:08:56.018506    4655 logs.go:123] Gathering logs for coredns [e11e4df1f883] ...
	I0916 04:08:56.018516    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e11e4df1f883"
	I0916 04:08:58.532426    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:09:03.534722    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:09:03.535284    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:09:03.599344    4655 logs.go:276] 2 containers: [6e10ade08bbc f96872a76692]
	I0916 04:09:03.599461    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:09:03.619569    4655 logs.go:276] 2 containers: [05bfeea67744 097738ff3821]
	I0916 04:09:03.619666    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:09:03.631726    4655 logs.go:276] 1 containers: [e11e4df1f883]
	I0916 04:09:03.631806    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:09:03.643183    4655 logs.go:276] 2 containers: [235c08ec6496 cc2fad651b22]
	I0916 04:09:03.643264    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:09:03.654206    4655 logs.go:276] 1 containers: [1ae068289404]
	I0916 04:09:03.654282    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:09:03.665429    4655 logs.go:276] 2 containers: [309ff69f986d 11b972b52433]
	I0916 04:09:03.665498    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:09:03.676206    4655 logs.go:276] 0 containers: []
	W0916 04:09:03.676218    4655 logs.go:278] No container was found matching "kindnet"
	I0916 04:09:03.676289    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:09:03.686753    4655 logs.go:276] 2 containers: [cd2986a75a6f 4e73ffbfc80e]
	I0916 04:09:03.686772    4655 logs.go:123] Gathering logs for kube-scheduler [cc2fad651b22] ...
	I0916 04:09:03.686777    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc2fad651b22"
	I0916 04:09:03.698102    4655 logs.go:123] Gathering logs for kube-proxy [1ae068289404] ...
	I0916 04:09:03.698116    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ae068289404"
	I0916 04:09:03.710146    4655 logs.go:123] Gathering logs for storage-provisioner [cd2986a75a6f] ...
	I0916 04:09:03.710157    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd2986a75a6f"
	I0916 04:09:03.721409    4655 logs.go:123] Gathering logs for container status ...
	I0916 04:09:03.721423    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:09:03.733224    4655 logs.go:123] Gathering logs for kube-apiserver [6e10ade08bbc] ...
	I0916 04:09:03.733236    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e10ade08bbc"
	I0916 04:09:03.747926    4655 logs.go:123] Gathering logs for kube-scheduler [235c08ec6496] ...
	I0916 04:09:03.747938    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 235c08ec6496"
	I0916 04:09:03.759653    4655 logs.go:123] Gathering logs for coredns [e11e4df1f883] ...
	I0916 04:09:03.759667    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e11e4df1f883"
	I0916 04:09:03.770871    4655 logs.go:123] Gathering logs for kube-controller-manager [309ff69f986d] ...
	I0916 04:09:03.770883    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 309ff69f986d"
	I0916 04:09:03.788678    4655 logs.go:123] Gathering logs for storage-provisioner [4e73ffbfc80e] ...
	I0916 04:09:03.788690    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e73ffbfc80e"
	I0916 04:09:03.808344    4655 logs.go:123] Gathering logs for Docker ...
	I0916 04:09:03.808355    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:09:03.831988    4655 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:09:03.831994    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:09:03.865994    4655 logs.go:123] Gathering logs for kube-apiserver [f96872a76692] ...
	I0916 04:09:03.866006    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f96872a76692"
	I0916 04:09:03.879707    4655 logs.go:123] Gathering logs for kube-controller-manager [11b972b52433] ...
	I0916 04:09:03.879719    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b972b52433"
	I0916 04:09:03.891142    4655 logs.go:123] Gathering logs for etcd [05bfeea67744] ...
	I0916 04:09:03.891151    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bfeea67744"
	I0916 04:09:03.905864    4655 logs.go:123] Gathering logs for etcd [097738ff3821] ...
	I0916 04:09:03.905878    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 097738ff3821"
	I0916 04:09:03.918777    4655 logs.go:123] Gathering logs for kubelet ...
	I0916 04:09:03.918788    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:09:03.955287    4655 logs.go:123] Gathering logs for dmesg ...
	I0916 04:09:03.955297    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
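
Each retry gathers the same material: per-component container IDs are discovered with `docker ps -a --filter=name=k8s_<component>`, then each ID is tailed with `docker logs --tail 400`, alongside the journalctl, dmesg, and `kubectl describe nodes` collections. A minimal sketch of that gather loop, built only from the commands visible in the log; the loop structure itself is an illustration, not minikube's actual implementation:

    # Tail the last 400 log lines of every control-plane container.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet storage-provisioner; do
        for id in $(docker ps -a --filter=name=k8s_${name} --format='{{.ID}}'); do
            docker logs --tail 400 "$id"
        done
    done
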
	I0916 04:09:06.461413    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:09:11.464110    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:09:11.464233    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:09:11.480366    4655 logs.go:276] 2 containers: [6e10ade08bbc f96872a76692]
	I0916 04:09:11.480457    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:09:11.492649    4655 logs.go:276] 2 containers: [05bfeea67744 097738ff3821]
	I0916 04:09:11.492734    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:09:11.505337    4655 logs.go:276] 1 containers: [e11e4df1f883]
	I0916 04:09:11.505431    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:09:11.517214    4655 logs.go:276] 2 containers: [235c08ec6496 cc2fad651b22]
	I0916 04:09:11.517305    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:09:11.529282    4655 logs.go:276] 1 containers: [1ae068289404]
	I0916 04:09:11.529371    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:09:11.541880    4655 logs.go:276] 2 containers: [309ff69f986d 11b972b52433]
	I0916 04:09:11.541963    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:09:11.557520    4655 logs.go:276] 0 containers: []
	W0916 04:09:11.557533    4655 logs.go:278] No container was found matching "kindnet"
	I0916 04:09:11.557605    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:09:11.570693    4655 logs.go:276] 2 containers: [cd2986a75a6f 4e73ffbfc80e]
	I0916 04:09:11.570712    4655 logs.go:123] Gathering logs for kube-apiserver [6e10ade08bbc] ...
	I0916 04:09:11.570718    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e10ade08bbc"
	I0916 04:09:11.584909    4655 logs.go:123] Gathering logs for kube-apiserver [f96872a76692] ...
	I0916 04:09:11.584922    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f96872a76692"
	I0916 04:09:11.596996    4655 logs.go:123] Gathering logs for etcd [097738ff3821] ...
	I0916 04:09:11.597006    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 097738ff3821"
	I0916 04:09:11.611159    4655 logs.go:123] Gathering logs for kube-scheduler [235c08ec6496] ...
	I0916 04:09:11.611172    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 235c08ec6496"
	I0916 04:09:11.623091    4655 logs.go:123] Gathering logs for kube-controller-manager [309ff69f986d] ...
	I0916 04:09:11.623105    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 309ff69f986d"
	I0916 04:09:11.640380    4655 logs.go:123] Gathering logs for kube-controller-manager [11b972b52433] ...
	I0916 04:09:11.640391    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b972b52433"
	I0916 04:09:11.655516    4655 logs.go:123] Gathering logs for container status ...
	I0916 04:09:11.655529    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:09:11.668467    4655 logs.go:123] Gathering logs for dmesg ...
	I0916 04:09:11.668477    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:09:11.673483    4655 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:09:11.673489    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:09:11.710000    4655 logs.go:123] Gathering logs for kube-proxy [1ae068289404] ...
	I0916 04:09:11.710014    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ae068289404"
	I0916 04:09:11.722570    4655 logs.go:123] Gathering logs for storage-provisioner [4e73ffbfc80e] ...
	I0916 04:09:11.722583    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e73ffbfc80e"
	I0916 04:09:11.736076    4655 logs.go:123] Gathering logs for etcd [05bfeea67744] ...
	I0916 04:09:11.736091    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bfeea67744"
	I0916 04:09:11.750751    4655 logs.go:123] Gathering logs for kube-scheduler [cc2fad651b22] ...
	I0916 04:09:11.750762    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc2fad651b22"
	I0916 04:09:11.761753    4655 logs.go:123] Gathering logs for kubelet ...
	I0916 04:09:11.761765    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:09:11.798076    4655 logs.go:123] Gathering logs for coredns [e11e4df1f883] ...
	I0916 04:09:11.798085    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e11e4df1f883"
	I0916 04:09:11.809546    4655 logs.go:123] Gathering logs for storage-provisioner [cd2986a75a6f] ...
	I0916 04:09:11.809558    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd2986a75a6f"
	I0916 04:09:11.822942    4655 logs.go:123] Gathering logs for Docker ...
	I0916 04:09:11.822958    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:09:14.352936    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:09:19.355638    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:09:19.355863    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:09:19.368430    4655 logs.go:276] 2 containers: [6e10ade08bbc f96872a76692]
	I0916 04:09:19.368520    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:09:19.380215    4655 logs.go:276] 2 containers: [05bfeea67744 097738ff3821]
	I0916 04:09:19.380304    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:09:19.391287    4655 logs.go:276] 1 containers: [e11e4df1f883]
	I0916 04:09:19.391376    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:09:19.402052    4655 logs.go:276] 2 containers: [235c08ec6496 cc2fad651b22]
	I0916 04:09:19.402137    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:09:19.412725    4655 logs.go:276] 1 containers: [1ae068289404]
	I0916 04:09:19.412804    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:09:19.423611    4655 logs.go:276] 2 containers: [309ff69f986d 11b972b52433]
	I0916 04:09:19.423687    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:09:19.434074    4655 logs.go:276] 0 containers: []
	W0916 04:09:19.434086    4655 logs.go:278] No container was found matching "kindnet"
	I0916 04:09:19.434156    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:09:19.444625    4655 logs.go:276] 2 containers: [cd2986a75a6f 4e73ffbfc80e]
	I0916 04:09:19.444642    4655 logs.go:123] Gathering logs for etcd [097738ff3821] ...
	I0916 04:09:19.444648    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 097738ff3821"
	I0916 04:09:19.458007    4655 logs.go:123] Gathering logs for storage-provisioner [cd2986a75a6f] ...
	I0916 04:09:19.458017    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd2986a75a6f"
	I0916 04:09:19.469673    4655 logs.go:123] Gathering logs for storage-provisioner [4e73ffbfc80e] ...
	I0916 04:09:19.469685    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e73ffbfc80e"
	I0916 04:09:19.481451    4655 logs.go:123] Gathering logs for dmesg ...
	I0916 04:09:19.481466    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:09:19.486295    4655 logs.go:123] Gathering logs for etcd [05bfeea67744] ...
	I0916 04:09:19.486302    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bfeea67744"
	I0916 04:09:19.500694    4655 logs.go:123] Gathering logs for kube-proxy [1ae068289404] ...
	I0916 04:09:19.500702    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ae068289404"
	I0916 04:09:19.516715    4655 logs.go:123] Gathering logs for kube-controller-manager [11b972b52433] ...
	I0916 04:09:19.516728    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b972b52433"
	I0916 04:09:19.529287    4655 logs.go:123] Gathering logs for Docker ...
	I0916 04:09:19.529297    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:09:19.554688    4655 logs.go:123] Gathering logs for kube-apiserver [6e10ade08bbc] ...
	I0916 04:09:19.554696    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e10ade08bbc"
	I0916 04:09:19.568698    4655 logs.go:123] Gathering logs for kube-scheduler [235c08ec6496] ...
	I0916 04:09:19.568709    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 235c08ec6496"
	I0916 04:09:19.581240    4655 logs.go:123] Gathering logs for kube-scheduler [cc2fad651b22] ...
	I0916 04:09:19.581250    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc2fad651b22"
	I0916 04:09:19.593149    4655 logs.go:123] Gathering logs for container status ...
	I0916 04:09:19.593159    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:09:19.605698    4655 logs.go:123] Gathering logs for kube-apiserver [f96872a76692] ...
	I0916 04:09:19.605709    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f96872a76692"
	I0916 04:09:19.617911    4655 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:09:19.617920    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:09:19.653914    4655 logs.go:123] Gathering logs for coredns [e11e4df1f883] ...
	I0916 04:09:19.653924    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e11e4df1f883"
	I0916 04:09:19.665722    4655 logs.go:123] Gathering logs for kube-controller-manager [309ff69f986d] ...
	I0916 04:09:19.665733    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 309ff69f986d"
	I0916 04:09:19.695914    4655 logs.go:123] Gathering logs for kubelet ...
	I0916 04:09:19.695923    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:09:22.235331    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:09:27.236529    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:09:27.236852    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:09:27.262393    4655 logs.go:276] 2 containers: [6e10ade08bbc f96872a76692]
	I0916 04:09:27.262527    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:09:27.278822    4655 logs.go:276] 2 containers: [05bfeea67744 097738ff3821]
	I0916 04:09:27.278928    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:09:27.292076    4655 logs.go:276] 1 containers: [e11e4df1f883]
	I0916 04:09:27.292165    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:09:27.303214    4655 logs.go:276] 2 containers: [235c08ec6496 cc2fad651b22]
	I0916 04:09:27.303300    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:09:27.317033    4655 logs.go:276] 1 containers: [1ae068289404]
	I0916 04:09:27.317114    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:09:27.330559    4655 logs.go:276] 2 containers: [309ff69f986d 11b972b52433]
	I0916 04:09:27.330645    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:09:27.341230    4655 logs.go:276] 0 containers: []
	W0916 04:09:27.341246    4655 logs.go:278] No container was found matching "kindnet"
	I0916 04:09:27.341311    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:09:27.351619    4655 logs.go:276] 2 containers: [cd2986a75a6f 4e73ffbfc80e]
	I0916 04:09:27.351637    4655 logs.go:123] Gathering logs for coredns [e11e4df1f883] ...
	I0916 04:09:27.351643    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e11e4df1f883"
	I0916 04:09:27.371987    4655 logs.go:123] Gathering logs for kube-scheduler [235c08ec6496] ...
	I0916 04:09:27.371998    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 235c08ec6496"
	I0916 04:09:27.383282    4655 logs.go:123] Gathering logs for kube-proxy [1ae068289404] ...
	I0916 04:09:27.383293    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ae068289404"
	I0916 04:09:27.395512    4655 logs.go:123] Gathering logs for dmesg ...
	I0916 04:09:27.395522    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:09:27.399755    4655 logs.go:123] Gathering logs for kube-controller-manager [309ff69f986d] ...
	I0916 04:09:27.399762    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 309ff69f986d"
	I0916 04:09:27.416705    4655 logs.go:123] Gathering logs for Docker ...
	I0916 04:09:27.416715    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:09:27.442276    4655 logs.go:123] Gathering logs for kube-apiserver [6e10ade08bbc] ...
	I0916 04:09:27.442284    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e10ade08bbc"
	I0916 04:09:27.455958    4655 logs.go:123] Gathering logs for kube-apiserver [f96872a76692] ...
	I0916 04:09:27.455967    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f96872a76692"
	I0916 04:09:27.471760    4655 logs.go:123] Gathering logs for etcd [05bfeea67744] ...
	I0916 04:09:27.471772    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bfeea67744"
	I0916 04:09:27.485439    4655 logs.go:123] Gathering logs for storage-provisioner [4e73ffbfc80e] ...
	I0916 04:09:27.485451    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e73ffbfc80e"
	I0916 04:09:27.500738    4655 logs.go:123] Gathering logs for kube-controller-manager [11b972b52433] ...
	I0916 04:09:27.500748    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b972b52433"
	I0916 04:09:27.512482    4655 logs.go:123] Gathering logs for storage-provisioner [cd2986a75a6f] ...
	I0916 04:09:27.512491    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd2986a75a6f"
	I0916 04:09:27.523987    4655 logs.go:123] Gathering logs for container status ...
	I0916 04:09:27.523997    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:09:27.535591    4655 logs.go:123] Gathering logs for kubelet ...
	I0916 04:09:27.535601    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:09:27.570568    4655 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:09:27.570578    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:09:27.605408    4655 logs.go:123] Gathering logs for etcd [097738ff3821] ...
	I0916 04:09:27.605419    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 097738ff3821"
	I0916 04:09:27.618702    4655 logs.go:123] Gathering logs for kube-scheduler [cc2fad651b22] ...
	I0916 04:09:27.618712    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc2fad651b22"
	I0916 04:09:30.133354    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:09:35.135949    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:09:35.136211    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:09:35.153830    4655 logs.go:276] 2 containers: [6e10ade08bbc f96872a76692]
	I0916 04:09:35.153940    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:09:35.171318    4655 logs.go:276] 2 containers: [05bfeea67744 097738ff3821]
	I0916 04:09:35.171406    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:09:35.181811    4655 logs.go:276] 1 containers: [e11e4df1f883]
	I0916 04:09:35.181885    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:09:35.192247    4655 logs.go:276] 2 containers: [235c08ec6496 cc2fad651b22]
	I0916 04:09:35.192335    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:09:35.209816    4655 logs.go:276] 1 containers: [1ae068289404]
	I0916 04:09:35.209897    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:09:35.220688    4655 logs.go:276] 2 containers: [309ff69f986d 11b972b52433]
	I0916 04:09:35.220776    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:09:35.230834    4655 logs.go:276] 0 containers: []
	W0916 04:09:35.230847    4655 logs.go:278] No container was found matching "kindnet"
	I0916 04:09:35.230917    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:09:35.250385    4655 logs.go:276] 2 containers: [cd2986a75a6f 4e73ffbfc80e]
	I0916 04:09:35.250402    4655 logs.go:123] Gathering logs for kubelet ...
	I0916 04:09:35.250408    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:09:35.287004    4655 logs.go:123] Gathering logs for dmesg ...
	I0916 04:09:35.287011    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:09:35.291176    4655 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:09:35.291182    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:09:35.325389    4655 logs.go:123] Gathering logs for kube-scheduler [235c08ec6496] ...
	I0916 04:09:35.325399    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 235c08ec6496"
	I0916 04:09:35.336978    4655 logs.go:123] Gathering logs for storage-provisioner [4e73ffbfc80e] ...
	I0916 04:09:35.336993    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e73ffbfc80e"
	I0916 04:09:35.357383    4655 logs.go:123] Gathering logs for coredns [e11e4df1f883] ...
	I0916 04:09:35.357393    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e11e4df1f883"
	I0916 04:09:35.369237    4655 logs.go:123] Gathering logs for kube-controller-manager [11b972b52433] ...
	I0916 04:09:35.369250    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b972b52433"
	I0916 04:09:35.380513    4655 logs.go:123] Gathering logs for kube-apiserver [f96872a76692] ...
	I0916 04:09:35.380524    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f96872a76692"
	I0916 04:09:35.392984    4655 logs.go:123] Gathering logs for kube-scheduler [cc2fad651b22] ...
	I0916 04:09:35.392994    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc2fad651b22"
	I0916 04:09:35.404458    4655 logs.go:123] Gathering logs for storage-provisioner [cd2986a75a6f] ...
	I0916 04:09:35.404471    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd2986a75a6f"
	I0916 04:09:35.415876    4655 logs.go:123] Gathering logs for Docker ...
	I0916 04:09:35.415886    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:09:35.439134    4655 logs.go:123] Gathering logs for container status ...
	I0916 04:09:35.439141    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:09:35.450825    4655 logs.go:123] Gathering logs for kube-apiserver [6e10ade08bbc] ...
	I0916 04:09:35.450834    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e10ade08bbc"
	I0916 04:09:35.464581    4655 logs.go:123] Gathering logs for etcd [05bfeea67744] ...
	I0916 04:09:35.464593    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bfeea67744"
	I0916 04:09:35.478565    4655 logs.go:123] Gathering logs for etcd [097738ff3821] ...
	I0916 04:09:35.478575    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 097738ff3821"
	I0916 04:09:35.492239    4655 logs.go:123] Gathering logs for kube-proxy [1ae068289404] ...
	I0916 04:09:35.492248    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ae068289404"
	I0916 04:09:35.503943    4655 logs.go:123] Gathering logs for kube-controller-manager [309ff69f986d] ...
	I0916 04:09:35.503955    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 309ff69f986d"
	I0916 04:09:38.023830    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:09:43.024541    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:09:43.024677    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:09:43.037644    4655 logs.go:276] 2 containers: [6e10ade08bbc f96872a76692]
	I0916 04:09:43.037720    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:09:43.048584    4655 logs.go:276] 2 containers: [05bfeea67744 097738ff3821]
	I0916 04:09:43.048673    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:09:43.060722    4655 logs.go:276] 1 containers: [e11e4df1f883]
	I0916 04:09:43.060814    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:09:43.072356    4655 logs.go:276] 2 containers: [235c08ec6496 cc2fad651b22]
	I0916 04:09:43.072443    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:09:43.083282    4655 logs.go:276] 1 containers: [1ae068289404]
	I0916 04:09:43.083363    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:09:43.094085    4655 logs.go:276] 2 containers: [309ff69f986d 11b972b52433]
	I0916 04:09:43.094168    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:09:43.104735    4655 logs.go:276] 0 containers: []
	W0916 04:09:43.104746    4655 logs.go:278] No container was found matching "kindnet"
	I0916 04:09:43.104822    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:09:43.115630    4655 logs.go:276] 2 containers: [cd2986a75a6f 4e73ffbfc80e]
	I0916 04:09:43.115648    4655 logs.go:123] Gathering logs for dmesg ...
	I0916 04:09:43.115654    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:09:43.120822    4655 logs.go:123] Gathering logs for kube-apiserver [f96872a76692] ...
	I0916 04:09:43.120833    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f96872a76692"
	I0916 04:09:43.134460    4655 logs.go:123] Gathering logs for etcd [097738ff3821] ...
	I0916 04:09:43.134473    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 097738ff3821"
	I0916 04:09:43.153345    4655 logs.go:123] Gathering logs for kube-controller-manager [11b972b52433] ...
	I0916 04:09:43.153362    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b972b52433"
	I0916 04:09:43.167396    4655 logs.go:123] Gathering logs for etcd [05bfeea67744] ...
	I0916 04:09:43.167408    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bfeea67744"
	I0916 04:09:43.183871    4655 logs.go:123] Gathering logs for kube-controller-manager [309ff69f986d] ...
	I0916 04:09:43.183886    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 309ff69f986d"
	I0916 04:09:43.205539    4655 logs.go:123] Gathering logs for kube-apiserver [6e10ade08bbc] ...
	I0916 04:09:43.205556    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e10ade08bbc"
	I0916 04:09:43.222288    4655 logs.go:123] Gathering logs for kube-scheduler [235c08ec6496] ...
	I0916 04:09:43.222300    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 235c08ec6496"
	I0916 04:09:43.237289    4655 logs.go:123] Gathering logs for kube-scheduler [cc2fad651b22] ...
	I0916 04:09:43.237301    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc2fad651b22"
	I0916 04:09:43.250016    4655 logs.go:123] Gathering logs for kube-proxy [1ae068289404] ...
	I0916 04:09:43.250030    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ae068289404"
	I0916 04:09:43.261793    4655 logs.go:123] Gathering logs for storage-provisioner [4e73ffbfc80e] ...
	I0916 04:09:43.261803    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e73ffbfc80e"
	I0916 04:09:43.273918    4655 logs.go:123] Gathering logs for Docker ...
	I0916 04:09:43.273928    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:09:43.301313    4655 logs.go:123] Gathering logs for kubelet ...
	I0916 04:09:43.301336    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:09:43.341155    4655 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:09:43.341171    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:09:43.378180    4655 logs.go:123] Gathering logs for coredns [e11e4df1f883] ...
	I0916 04:09:43.378194    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e11e4df1f883"
	I0916 04:09:43.391849    4655 logs.go:123] Gathering logs for storage-provisioner [cd2986a75a6f] ...
	I0916 04:09:43.391864    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd2986a75a6f"
	I0916 04:09:43.404650    4655 logs.go:123] Gathering logs for container status ...
	I0916 04:09:43.404661    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:09:45.921318    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:09:50.922549    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:09:50.923200    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:09:50.962438    4655 logs.go:276] 2 containers: [6e10ade08bbc f96872a76692]
	I0916 04:09:50.962609    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:09:50.984357    4655 logs.go:276] 2 containers: [05bfeea67744 097738ff3821]
	I0916 04:09:50.984539    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:09:51.006580    4655 logs.go:276] 1 containers: [e11e4df1f883]
	I0916 04:09:51.006671    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:09:51.018569    4655 logs.go:276] 2 containers: [235c08ec6496 cc2fad651b22]
	I0916 04:09:51.018648    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:09:51.033118    4655 logs.go:276] 1 containers: [1ae068289404]
	I0916 04:09:51.033212    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:09:51.044042    4655 logs.go:276] 2 containers: [309ff69f986d 11b972b52433]
	I0916 04:09:51.044129    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:09:51.054841    4655 logs.go:276] 0 containers: []
	W0916 04:09:51.054853    4655 logs.go:278] No container was found matching "kindnet"
	I0916 04:09:51.054925    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:09:51.065722    4655 logs.go:276] 2 containers: [cd2986a75a6f 4e73ffbfc80e]
	I0916 04:09:51.065740    4655 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:09:51.065746    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:09:51.100841    4655 logs.go:123] Gathering logs for kube-apiserver [f96872a76692] ...
	I0916 04:09:51.100851    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f96872a76692"
	I0916 04:09:51.112918    4655 logs.go:123] Gathering logs for etcd [097738ff3821] ...
	I0916 04:09:51.112929    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 097738ff3821"
	I0916 04:09:51.129635    4655 logs.go:123] Gathering logs for Docker ...
	I0916 04:09:51.129645    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:09:51.152569    4655 logs.go:123] Gathering logs for container status ...
	I0916 04:09:51.152576    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:09:51.165062    4655 logs.go:123] Gathering logs for kubelet ...
	I0916 04:09:51.165071    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:09:51.200118    4655 logs.go:123] Gathering logs for dmesg ...
	I0916 04:09:51.200128    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:09:51.204920    4655 logs.go:123] Gathering logs for kube-controller-manager [309ff69f986d] ...
	I0916 04:09:51.204930    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 309ff69f986d"
	I0916 04:09:51.224441    4655 logs.go:123] Gathering logs for kube-apiserver [6e10ade08bbc] ...
	I0916 04:09:51.224452    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e10ade08bbc"
	I0916 04:09:51.239986    4655 logs.go:123] Gathering logs for etcd [05bfeea67744] ...
	I0916 04:09:51.239997    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bfeea67744"
	I0916 04:09:51.253997    4655 logs.go:123] Gathering logs for coredns [e11e4df1f883] ...
	I0916 04:09:51.254007    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e11e4df1f883"
	I0916 04:09:51.265534    4655 logs.go:123] Gathering logs for kube-scheduler [cc2fad651b22] ...
	I0916 04:09:51.265549    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc2fad651b22"
	I0916 04:09:51.276624    4655 logs.go:123] Gathering logs for kube-controller-manager [11b972b52433] ...
	I0916 04:09:51.276634    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b972b52433"
	I0916 04:09:51.287672    4655 logs.go:123] Gathering logs for kube-scheduler [235c08ec6496] ...
	I0916 04:09:51.287682    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 235c08ec6496"
	I0916 04:09:51.298980    4655 logs.go:123] Gathering logs for kube-proxy [1ae068289404] ...
	I0916 04:09:51.298990    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ae068289404"
	I0916 04:09:51.310965    4655 logs.go:123] Gathering logs for storage-provisioner [cd2986a75a6f] ...
	I0916 04:09:51.310975    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd2986a75a6f"
	I0916 04:09:51.322256    4655 logs.go:123] Gathering logs for storage-provisioner [4e73ffbfc80e] ...
	I0916 04:09:51.322270    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e73ffbfc80e"
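[Editor's note] Before each retry, the collector re-enumerates the containers for every control-plane component with docker ps -a --filter=name=k8s_<component> --format={{.ID}}, then tails the last 400 lines from each ID, exactly as the pairs of lines above show (including the warning path when a filter such as "kindnet" matches nothing). A rough sketch of that fan-out, assuming a local docker binary; the real runner executes these commands over SSH inside the VM via ssh_runner:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // componentIDs lists all container IDs (running or exited) whose name
    // matches the k8s_<component> prefix, mirroring the docker ps lines above.
    func componentIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
        for _, c := range components {
            ids, err := componentIDs(c)
            if err != nil || len(ids) == 0 {
                fmt.Printf("No container was found matching %q\n", c)
                continue
            }
            for _, id := range ids {
                // tail the last 400 lines, as in `docker logs --tail 400 <id>`
                logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
                fmt.Printf("== %s [%s] ==\n%s", c, id, logs)
            }
        }
    }
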
	I0916 04:09:53.835800    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:09:58.838029    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:09:58.838322    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:09:58.856564    4655 logs.go:276] 2 containers: [6e10ade08bbc f96872a76692]
	I0916 04:09:58.856685    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:09:58.870321    4655 logs.go:276] 2 containers: [05bfeea67744 097738ff3821]
	I0916 04:09:58.870409    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:09:58.881788    4655 logs.go:276] 1 containers: [e11e4df1f883]
	I0916 04:09:58.881865    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:09:58.892107    4655 logs.go:276] 2 containers: [235c08ec6496 cc2fad651b22]
	I0916 04:09:58.892196    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:09:58.902578    4655 logs.go:276] 1 containers: [1ae068289404]
	I0916 04:09:58.902666    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:09:58.913112    4655 logs.go:276] 2 containers: [309ff69f986d 11b972b52433]
	I0916 04:09:58.913203    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:09:58.923936    4655 logs.go:276] 0 containers: []
	W0916 04:09:58.923949    4655 logs.go:278] No container was found matching "kindnet"
	I0916 04:09:58.924035    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:09:58.934494    4655 logs.go:276] 2 containers: [cd2986a75a6f 4e73ffbfc80e]
	I0916 04:09:58.934511    4655 logs.go:123] Gathering logs for dmesg ...
	I0916 04:09:58.934517    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:09:58.938785    4655 logs.go:123] Gathering logs for kube-scheduler [235c08ec6496] ...
	I0916 04:09:58.938794    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 235c08ec6496"
	I0916 04:09:58.950193    4655 logs.go:123] Gathering logs for container status ...
	I0916 04:09:58.950209    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:09:58.962150    4655 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:09:58.962162    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:09:58.996473    4655 logs.go:123] Gathering logs for kube-controller-manager [11b972b52433] ...
	I0916 04:09:58.996486    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b972b52433"
	I0916 04:09:59.019719    4655 logs.go:123] Gathering logs for storage-provisioner [cd2986a75a6f] ...
	I0916 04:09:59.019736    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd2986a75a6f"
	I0916 04:09:59.037124    4655 logs.go:123] Gathering logs for kubelet ...
	I0916 04:09:59.037138    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:09:59.076275    4655 logs.go:123] Gathering logs for kube-apiserver [6e10ade08bbc] ...
	I0916 04:09:59.076293    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e10ade08bbc"
	I0916 04:09:59.090415    4655 logs.go:123] Gathering logs for etcd [097738ff3821] ...
	I0916 04:09:59.090431    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 097738ff3821"
	I0916 04:09:59.108355    4655 logs.go:123] Gathering logs for coredns [e11e4df1f883] ...
	I0916 04:09:59.108365    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e11e4df1f883"
	I0916 04:09:59.119852    4655 logs.go:123] Gathering logs for kube-controller-manager [309ff69f986d] ...
	I0916 04:09:59.119866    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 309ff69f986d"
	I0916 04:09:59.137250    4655 logs.go:123] Gathering logs for Docker ...
	I0916 04:09:59.137262    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:09:59.162161    4655 logs.go:123] Gathering logs for kube-apiserver [f96872a76692] ...
	I0916 04:09:59.162171    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f96872a76692"
	I0916 04:09:59.174601    4655 logs.go:123] Gathering logs for etcd [05bfeea67744] ...
	I0916 04:09:59.174617    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bfeea67744"
	I0916 04:09:59.188276    4655 logs.go:123] Gathering logs for kube-scheduler [cc2fad651b22] ...
	I0916 04:09:59.188289    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc2fad651b22"
	I0916 04:09:59.199617    4655 logs.go:123] Gathering logs for kube-proxy [1ae068289404] ...
	I0916 04:09:59.199634    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ae068289404"
	I0916 04:09:59.213973    4655 logs.go:123] Gathering logs for storage-provisioner [4e73ffbfc80e] ...
	I0916 04:09:59.213986    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e73ffbfc80e"
	I0916 04:10:01.727966    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:10:06.730192    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:10:06.730736    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:10:06.766583    4655 logs.go:276] 2 containers: [6e10ade08bbc f96872a76692]
	I0916 04:10:06.766748    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:10:06.787567    4655 logs.go:276] 2 containers: [05bfeea67744 097738ff3821]
	I0916 04:10:06.787695    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:10:06.802708    4655 logs.go:276] 1 containers: [e11e4df1f883]
	I0916 04:10:06.802794    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:10:06.815317    4655 logs.go:276] 2 containers: [235c08ec6496 cc2fad651b22]
	I0916 04:10:06.815398    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:10:06.826122    4655 logs.go:276] 1 containers: [1ae068289404]
	I0916 04:10:06.826195    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:10:06.837393    4655 logs.go:276] 2 containers: [309ff69f986d 11b972b52433]
	I0916 04:10:06.837468    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:10:06.847964    4655 logs.go:276] 0 containers: []
	W0916 04:10:06.847981    4655 logs.go:278] No container was found matching "kindnet"
	I0916 04:10:06.848051    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:10:06.859026    4655 logs.go:276] 2 containers: [cd2986a75a6f 4e73ffbfc80e]
	I0916 04:10:06.859047    4655 logs.go:123] Gathering logs for kube-proxy [1ae068289404] ...
	I0916 04:10:06.859051    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ae068289404"
	I0916 04:10:06.870934    4655 logs.go:123] Gathering logs for Docker ...
	I0916 04:10:06.870944    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:10:06.894838    4655 logs.go:123] Gathering logs for container status ...
	I0916 04:10:06.894846    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:10:06.907979    4655 logs.go:123] Gathering logs for kube-apiserver [6e10ade08bbc] ...
	I0916 04:10:06.907989    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e10ade08bbc"
	I0916 04:10:06.922523    4655 logs.go:123] Gathering logs for etcd [05bfeea67744] ...
	I0916 04:10:06.922533    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bfeea67744"
	I0916 04:10:06.936304    4655 logs.go:123] Gathering logs for kube-scheduler [cc2fad651b22] ...
	I0916 04:10:06.936313    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc2fad651b22"
	I0916 04:10:06.948035    4655 logs.go:123] Gathering logs for kube-controller-manager [11b972b52433] ...
	I0916 04:10:06.948045    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b972b52433"
	I0916 04:10:06.959582    4655 logs.go:123] Gathering logs for kubelet ...
	I0916 04:10:06.959592    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:10:06.994480    4655 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:10:06.994488    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:10:07.029131    4655 logs.go:123] Gathering logs for kube-scheduler [235c08ec6496] ...
	I0916 04:10:07.029145    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 235c08ec6496"
	I0916 04:10:07.041516    4655 logs.go:123] Gathering logs for storage-provisioner [cd2986a75a6f] ...
	I0916 04:10:07.041527    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd2986a75a6f"
	I0916 04:10:07.053435    4655 logs.go:123] Gathering logs for kube-apiserver [f96872a76692] ...
	I0916 04:10:07.053445    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f96872a76692"
	I0916 04:10:07.065330    4655 logs.go:123] Gathering logs for coredns [e11e4df1f883] ...
	I0916 04:10:07.065340    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e11e4df1f883"
	I0916 04:10:07.076737    4655 logs.go:123] Gathering logs for kube-controller-manager [309ff69f986d] ...
	I0916 04:10:07.076747    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 309ff69f986d"
	I0916 04:10:07.096648    4655 logs.go:123] Gathering logs for storage-provisioner [4e73ffbfc80e] ...
	I0916 04:10:07.096659    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e73ffbfc80e"
	I0916 04:10:07.109321    4655 logs.go:123] Gathering logs for dmesg ...
	I0916 04:10:07.109332    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:10:07.113913    4655 logs.go:123] Gathering logs for etcd [097738ff3821] ...
	I0916 04:10:07.113921    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 097738ff3821"
	I0916 04:10:09.629838    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:10:14.632240    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:10:14.632724    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:10:14.662937    4655 logs.go:276] 2 containers: [6e10ade08bbc f96872a76692]
	I0916 04:10:14.663099    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:10:14.681589    4655 logs.go:276] 2 containers: [05bfeea67744 097738ff3821]
	I0916 04:10:14.681676    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:10:14.695443    4655 logs.go:276] 1 containers: [e11e4df1f883]
	I0916 04:10:14.695539    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:10:14.707631    4655 logs.go:276] 2 containers: [235c08ec6496 cc2fad651b22]
	I0916 04:10:14.707713    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:10:14.720231    4655 logs.go:276] 1 containers: [1ae068289404]
	I0916 04:10:14.720305    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:10:14.730501    4655 logs.go:276] 2 containers: [309ff69f986d 11b972b52433]
	I0916 04:10:14.730583    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:10:14.740662    4655 logs.go:276] 0 containers: []
	W0916 04:10:14.740675    4655 logs.go:278] No container was found matching "kindnet"
	I0916 04:10:14.740748    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:10:14.751110    4655 logs.go:276] 2 containers: [cd2986a75a6f 4e73ffbfc80e]
	I0916 04:10:14.751126    4655 logs.go:123] Gathering logs for etcd [05bfeea67744] ...
	I0916 04:10:14.751131    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bfeea67744"
	I0916 04:10:14.764602    4655 logs.go:123] Gathering logs for coredns [e11e4df1f883] ...
	I0916 04:10:14.764616    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e11e4df1f883"
	I0916 04:10:14.776975    4655 logs.go:123] Gathering logs for kube-controller-manager [309ff69f986d] ...
	I0916 04:10:14.776985    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 309ff69f986d"
	I0916 04:10:14.798088    4655 logs.go:123] Gathering logs for storage-provisioner [4e73ffbfc80e] ...
	I0916 04:10:14.798101    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e73ffbfc80e"
	I0916 04:10:14.815326    4655 logs.go:123] Gathering logs for dmesg ...
	I0916 04:10:14.815336    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:10:14.820326    4655 logs.go:123] Gathering logs for kube-apiserver [6e10ade08bbc] ...
	I0916 04:10:14.820341    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e10ade08bbc"
	I0916 04:10:14.839205    4655 logs.go:123] Gathering logs for kube-scheduler [235c08ec6496] ...
	I0916 04:10:14.839216    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 235c08ec6496"
	I0916 04:10:14.851301    4655 logs.go:123] Gathering logs for Docker ...
	I0916 04:10:14.851310    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:10:14.875353    4655 logs.go:123] Gathering logs for container status ...
	I0916 04:10:14.875363    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:10:14.887332    4655 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:10:14.887342    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:10:14.922336    4655 logs.go:123] Gathering logs for kube-scheduler [cc2fad651b22] ...
	I0916 04:10:14.922344    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc2fad651b22"
	I0916 04:10:14.937351    4655 logs.go:123] Gathering logs for kube-proxy [1ae068289404] ...
	I0916 04:10:14.937364    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ae068289404"
	I0916 04:10:14.949772    4655 logs.go:123] Gathering logs for kube-controller-manager [11b972b52433] ...
	I0916 04:10:14.949783    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b972b52433"
	I0916 04:10:14.962544    4655 logs.go:123] Gathering logs for kubelet ...
	I0916 04:10:14.962559    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:10:15.000051    4655 logs.go:123] Gathering logs for kube-apiserver [f96872a76692] ...
	I0916 04:10:15.000062    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f96872a76692"
	I0916 04:10:15.015752    4655 logs.go:123] Gathering logs for etcd [097738ff3821] ...
	I0916 04:10:15.015762    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 097738ff3821"
	I0916 04:10:15.031189    4655 logs.go:123] Gathering logs for storage-provisioner [cd2986a75a6f] ...
	I0916 04:10:15.031201    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd2986a75a6f"
	I0916 04:10:17.550737    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:10:22.553394    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:10:22.553593    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:10:22.566028    4655 logs.go:276] 2 containers: [6e10ade08bbc f96872a76692]
	I0916 04:10:22.566109    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:10:22.579683    4655 logs.go:276] 2 containers: [05bfeea67744 097738ff3821]
	I0916 04:10:22.579776    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:10:22.590286    4655 logs.go:276] 1 containers: [e11e4df1f883]
	I0916 04:10:22.590374    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:10:22.601076    4655 logs.go:276] 2 containers: [235c08ec6496 cc2fad651b22]
	I0916 04:10:22.601158    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:10:22.614663    4655 logs.go:276] 1 containers: [1ae068289404]
	I0916 04:10:22.614745    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:10:22.625747    4655 logs.go:276] 2 containers: [309ff69f986d 11b972b52433]
	I0916 04:10:22.625826    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:10:22.639246    4655 logs.go:276] 0 containers: []
	W0916 04:10:22.639258    4655 logs.go:278] No container was found matching "kindnet"
	I0916 04:10:22.639321    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:10:22.651554    4655 logs.go:276] 2 containers: [cd2986a75a6f 4e73ffbfc80e]
	I0916 04:10:22.651570    4655 logs.go:123] Gathering logs for Docker ...
	I0916 04:10:22.651575    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:10:22.676563    4655 logs.go:123] Gathering logs for dmesg ...
	I0916 04:10:22.676570    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:10:22.681287    4655 logs.go:123] Gathering logs for etcd [097738ff3821] ...
	I0916 04:10:22.681294    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 097738ff3821"
	I0916 04:10:22.697770    4655 logs.go:123] Gathering logs for kube-scheduler [235c08ec6496] ...
	I0916 04:10:22.697785    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 235c08ec6496"
	I0916 04:10:22.710524    4655 logs.go:123] Gathering logs for kube-proxy [1ae068289404] ...
	I0916 04:10:22.710540    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ae068289404"
	I0916 04:10:22.722321    4655 logs.go:123] Gathering logs for kube-controller-manager [11b972b52433] ...
	I0916 04:10:22.722337    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b972b52433"
	I0916 04:10:22.734120    4655 logs.go:123] Gathering logs for storage-provisioner [cd2986a75a6f] ...
	I0916 04:10:22.734134    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd2986a75a6f"
	I0916 04:10:22.745871    4655 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:10:22.745881    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:10:22.781884    4655 logs.go:123] Gathering logs for kube-apiserver [f96872a76692] ...
	I0916 04:10:22.781896    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f96872a76692"
	I0916 04:10:22.794419    4655 logs.go:123] Gathering logs for kube-scheduler [cc2fad651b22] ...
	I0916 04:10:22.794428    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc2fad651b22"
	I0916 04:10:22.806872    4655 logs.go:123] Gathering logs for kube-controller-manager [309ff69f986d] ...
	I0916 04:10:22.806883    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 309ff69f986d"
	I0916 04:10:22.824637    4655 logs.go:123] Gathering logs for kubelet ...
	I0916 04:10:22.824650    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:10:22.862184    4655 logs.go:123] Gathering logs for kube-apiserver [6e10ade08bbc] ...
	I0916 04:10:22.862192    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e10ade08bbc"
	I0916 04:10:22.876618    4655 logs.go:123] Gathering logs for etcd [05bfeea67744] ...
	I0916 04:10:22.876628    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bfeea67744"
	I0916 04:10:22.891224    4655 logs.go:123] Gathering logs for container status ...
	I0916 04:10:22.891232    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:10:22.903654    4655 logs.go:123] Gathering logs for coredns [e11e4df1f883] ...
	I0916 04:10:22.903664    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e11e4df1f883"
	I0916 04:10:22.915739    4655 logs.go:123] Gathering logs for storage-provisioner [4e73ffbfc80e] ...
	I0916 04:10:22.915748    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e73ffbfc80e"
	I0916 04:10:25.429136    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:10:30.431422    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:10:30.431952    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:10:30.466829    4655 logs.go:276] 2 containers: [6e10ade08bbc f96872a76692]
	I0916 04:10:30.467006    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:10:30.487831    4655 logs.go:276] 2 containers: [05bfeea67744 097738ff3821]
	I0916 04:10:30.487954    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:10:30.504130    4655 logs.go:276] 1 containers: [e11e4df1f883]
	I0916 04:10:30.504222    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:10:30.520984    4655 logs.go:276] 2 containers: [235c08ec6496 cc2fad651b22]
	I0916 04:10:30.521064    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:10:30.531751    4655 logs.go:276] 1 containers: [1ae068289404]
	I0916 04:10:30.531847    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:10:30.546500    4655 logs.go:276] 2 containers: [309ff69f986d 11b972b52433]
	I0916 04:10:30.546582    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:10:30.556966    4655 logs.go:276] 0 containers: []
	W0916 04:10:30.556978    4655 logs.go:278] No container was found matching "kindnet"
	I0916 04:10:30.557040    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:10:30.569262    4655 logs.go:276] 2 containers: [cd2986a75a6f 4e73ffbfc80e]
	I0916 04:10:30.569284    4655 logs.go:123] Gathering logs for storage-provisioner [4e73ffbfc80e] ...
	I0916 04:10:30.569289    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e73ffbfc80e"
	I0916 04:10:30.581815    4655 logs.go:123] Gathering logs for kubelet ...
	I0916 04:10:30.581828    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:10:30.616586    4655 logs.go:123] Gathering logs for kube-apiserver [6e10ade08bbc] ...
	I0916 04:10:30.616602    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e10ade08bbc"
	I0916 04:10:30.636552    4655 logs.go:123] Gathering logs for etcd [05bfeea67744] ...
	I0916 04:10:30.636563    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bfeea67744"
	I0916 04:10:30.650845    4655 logs.go:123] Gathering logs for kube-proxy [1ae068289404] ...
	I0916 04:10:30.650853    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ae068289404"
	I0916 04:10:30.662652    4655 logs.go:123] Gathering logs for kube-controller-manager [309ff69f986d] ...
	I0916 04:10:30.662664    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 309ff69f986d"
	I0916 04:10:30.679856    4655 logs.go:123] Gathering logs for coredns [e11e4df1f883] ...
	I0916 04:10:30.679865    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e11e4df1f883"
	I0916 04:10:30.691159    4655 logs.go:123] Gathering logs for kube-controller-manager [11b972b52433] ...
	I0916 04:10:30.691173    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b972b52433"
	I0916 04:10:30.702168    4655 logs.go:123] Gathering logs for container status ...
	I0916 04:10:30.702177    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:10:30.714153    4655 logs.go:123] Gathering logs for dmesg ...
	I0916 04:10:30.714163    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:10:30.718426    4655 logs.go:123] Gathering logs for kube-apiserver [f96872a76692] ...
	I0916 04:10:30.718435    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f96872a76692"
	I0916 04:10:30.734321    4655 logs.go:123] Gathering logs for etcd [097738ff3821] ...
	I0916 04:10:30.734331    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 097738ff3821"
	I0916 04:10:30.749686    4655 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:10:30.749697    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:10:30.784072    4655 logs.go:123] Gathering logs for kube-scheduler [235c08ec6496] ...
	I0916 04:10:30.784083    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 235c08ec6496"
	I0916 04:10:30.795999    4655 logs.go:123] Gathering logs for kube-scheduler [cc2fad651b22] ...
	I0916 04:10:30.796008    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc2fad651b22"
	I0916 04:10:30.807276    4655 logs.go:123] Gathering logs for storage-provisioner [cd2986a75a6f] ...
	I0916 04:10:30.807286    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd2986a75a6f"
	I0916 04:10:30.818868    4655 logs.go:123] Gathering logs for Docker ...
	I0916 04:10:30.818877    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:10:33.344851    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:10:38.345518    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:10:38.345597    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:10:38.356528    4655 logs.go:276] 2 containers: [6e10ade08bbc f96872a76692]
	I0916 04:10:38.356618    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:10:38.368213    4655 logs.go:276] 2 containers: [05bfeea67744 097738ff3821]
	I0916 04:10:38.368296    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:10:38.379510    4655 logs.go:276] 1 containers: [e11e4df1f883]
	I0916 04:10:38.379593    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:10:38.390220    4655 logs.go:276] 2 containers: [235c08ec6496 cc2fad651b22]
	I0916 04:10:38.390303    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:10:38.401321    4655 logs.go:276] 1 containers: [1ae068289404]
	I0916 04:10:38.401406    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:10:38.415617    4655 logs.go:276] 2 containers: [309ff69f986d 11b972b52433]
	I0916 04:10:38.415697    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:10:38.426627    4655 logs.go:276] 0 containers: []
	W0916 04:10:38.426642    4655 logs.go:278] No container was found matching "kindnet"
	I0916 04:10:38.426716    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:10:38.437184    4655 logs.go:276] 2 containers: [cd2986a75a6f 4e73ffbfc80e]
	I0916 04:10:38.437203    4655 logs.go:123] Gathering logs for kubelet ...
	I0916 04:10:38.437208    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:10:38.472904    4655 logs.go:123] Gathering logs for etcd [05bfeea67744] ...
	I0916 04:10:38.472923    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bfeea67744"
	I0916 04:10:38.487648    4655 logs.go:123] Gathering logs for coredns [e11e4df1f883] ...
	I0916 04:10:38.487662    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e11e4df1f883"
	I0916 04:10:38.499398    4655 logs.go:123] Gathering logs for kube-controller-manager [11b972b52433] ...
	I0916 04:10:38.499408    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b972b52433"
	I0916 04:10:38.511655    4655 logs.go:123] Gathering logs for Docker ...
	I0916 04:10:38.511664    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:10:38.538095    4655 logs.go:123] Gathering logs for etcd [097738ff3821] ...
	I0916 04:10:38.538109    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 097738ff3821"
	I0916 04:10:38.555052    4655 logs.go:123] Gathering logs for kube-scheduler [235c08ec6496] ...
	I0916 04:10:38.555069    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 235c08ec6496"
	I0916 04:10:38.568282    4655 logs.go:123] Gathering logs for kube-scheduler [cc2fad651b22] ...
	I0916 04:10:38.568293    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc2fad651b22"
	I0916 04:10:38.580905    4655 logs.go:123] Gathering logs for storage-provisioner [4e73ffbfc80e] ...
	I0916 04:10:38.580916    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e73ffbfc80e"
	I0916 04:10:38.603776    4655 logs.go:123] Gathering logs for kube-apiserver [6e10ade08bbc] ...
	I0916 04:10:38.603787    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e10ade08bbc"
	I0916 04:10:38.619419    4655 logs.go:123] Gathering logs for kube-apiserver [f96872a76692] ...
	I0916 04:10:38.619434    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f96872a76692"
	I0916 04:10:38.631317    4655 logs.go:123] Gathering logs for dmesg ...
	I0916 04:10:38.631331    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:10:38.636375    4655 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:10:38.636381    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:10:38.674226    4655 logs.go:123] Gathering logs for kube-proxy [1ae068289404] ...
	I0916 04:10:38.674236    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ae068289404"
	I0916 04:10:38.689179    4655 logs.go:123] Gathering logs for kube-controller-manager [309ff69f986d] ...
	I0916 04:10:38.689189    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 309ff69f986d"
	I0916 04:10:38.707123    4655 logs.go:123] Gathering logs for storage-provisioner [cd2986a75a6f] ...
	I0916 04:10:38.707135    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd2986a75a6f"
	I0916 04:10:38.721383    4655 logs.go:123] Gathering logs for container status ...
	I0916 04:10:38.721395    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:10:41.236157    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:10:46.236524    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:10:46.236631    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:10:46.254140    4655 logs.go:276] 2 containers: [6e10ade08bbc f96872a76692]
	I0916 04:10:46.254233    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:10:46.265546    4655 logs.go:276] 2 containers: [05bfeea67744 097738ff3821]
	I0916 04:10:46.265634    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:10:46.278403    4655 logs.go:276] 1 containers: [e11e4df1f883]
	I0916 04:10:46.278486    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:10:46.289056    4655 logs.go:276] 2 containers: [235c08ec6496 cc2fad651b22]
	I0916 04:10:46.289139    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:10:46.299852    4655 logs.go:276] 1 containers: [1ae068289404]
	I0916 04:10:46.299931    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:10:46.310766    4655 logs.go:276] 2 containers: [309ff69f986d 11b972b52433]
	I0916 04:10:46.310847    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:10:46.323345    4655 logs.go:276] 0 containers: []
	W0916 04:10:46.323357    4655 logs.go:278] No container was found matching "kindnet"
	I0916 04:10:46.323427    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:10:46.334332    4655 logs.go:276] 2 containers: [cd2986a75a6f 4e73ffbfc80e]
	I0916 04:10:46.334348    4655 logs.go:123] Gathering logs for etcd [097738ff3821] ...
	I0916 04:10:46.334354    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 097738ff3821"
	I0916 04:10:46.347830    4655 logs.go:123] Gathering logs for kube-scheduler [cc2fad651b22] ...
	I0916 04:10:46.347840    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc2fad651b22"
	I0916 04:10:46.359825    4655 logs.go:123] Gathering logs for kubelet ...
	I0916 04:10:46.359837    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:10:46.396988    4655 logs.go:123] Gathering logs for kube-apiserver [6e10ade08bbc] ...
	I0916 04:10:46.396996    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e10ade08bbc"
	I0916 04:10:46.414541    4655 logs.go:123] Gathering logs for Docker ...
	I0916 04:10:46.414550    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:10:46.437628    4655 logs.go:123] Gathering logs for container status ...
	I0916 04:10:46.437636    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:10:46.449252    4655 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:10:46.449266    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:10:46.487286    4655 logs.go:123] Gathering logs for kube-apiserver [f96872a76692] ...
	I0916 04:10:46.487298    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f96872a76692"
	I0916 04:10:46.503535    4655 logs.go:123] Gathering logs for kube-controller-manager [11b972b52433] ...
	I0916 04:10:46.503546    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b972b52433"
	I0916 04:10:46.523514    4655 logs.go:123] Gathering logs for kube-scheduler [235c08ec6496] ...
	I0916 04:10:46.523525    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 235c08ec6496"
	I0916 04:10:46.535968    4655 logs.go:123] Gathering logs for kube-controller-manager [309ff69f986d] ...
	I0916 04:10:46.535979    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 309ff69f986d"
	I0916 04:10:46.554543    4655 logs.go:123] Gathering logs for coredns [e11e4df1f883] ...
	I0916 04:10:46.554554    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e11e4df1f883"
	I0916 04:10:46.566356    4655 logs.go:123] Gathering logs for kube-proxy [1ae068289404] ...
	I0916 04:10:46.566366    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ae068289404"
	I0916 04:10:46.582000    4655 logs.go:123] Gathering logs for storage-provisioner [cd2986a75a6f] ...
	I0916 04:10:46.582014    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd2986a75a6f"
	I0916 04:10:46.593933    4655 logs.go:123] Gathering logs for storage-provisioner [4e73ffbfc80e] ...
	I0916 04:10:46.593950    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e73ffbfc80e"
	I0916 04:10:46.605906    4655 logs.go:123] Gathering logs for dmesg ...
	I0916 04:10:46.605919    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:10:46.610296    4655 logs.go:123] Gathering logs for etcd [05bfeea67744] ...
	I0916 04:10:46.610303    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bfeea67744"
	I0916 04:10:49.127988    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:10:54.130076    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:10:54.130320    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:10:54.150603    4655 logs.go:276] 2 containers: [6e10ade08bbc f96872a76692]
	I0916 04:10:54.150717    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:10:54.165222    4655 logs.go:276] 2 containers: [05bfeea67744 097738ff3821]
	I0916 04:10:54.165312    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:10:54.176823    4655 logs.go:276] 1 containers: [e11e4df1f883]
	I0916 04:10:54.176911    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:10:54.188093    4655 logs.go:276] 2 containers: [235c08ec6496 cc2fad651b22]
	I0916 04:10:54.188182    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:10:54.198775    4655 logs.go:276] 1 containers: [1ae068289404]
	I0916 04:10:54.198850    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:10:54.209471    4655 logs.go:276] 2 containers: [309ff69f986d 11b972b52433]
	I0916 04:10:54.209551    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:10:54.223825    4655 logs.go:276] 0 containers: []
	W0916 04:10:54.223841    4655 logs.go:278] No container was found matching "kindnet"
	I0916 04:10:54.223912    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:10:54.238524    4655 logs.go:276] 2 containers: [cd2986a75a6f 4e73ffbfc80e]
	I0916 04:10:54.238541    4655 logs.go:123] Gathering logs for etcd [097738ff3821] ...
	I0916 04:10:54.238547    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 097738ff3821"
	I0916 04:10:54.251308    4655 logs.go:123] Gathering logs for kube-controller-manager [309ff69f986d] ...
	I0916 04:10:54.251318    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 309ff69f986d"
	I0916 04:10:54.268324    4655 logs.go:123] Gathering logs for container status ...
	I0916 04:10:54.268334    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:10:54.281329    4655 logs.go:123] Gathering logs for coredns [e11e4df1f883] ...
	I0916 04:10:54.281339    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e11e4df1f883"
	I0916 04:10:54.292767    4655 logs.go:123] Gathering logs for kube-scheduler [235c08ec6496] ...
	I0916 04:10:54.292779    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 235c08ec6496"
	I0916 04:10:54.304822    4655 logs.go:123] Gathering logs for kube-proxy [1ae068289404] ...
	I0916 04:10:54.304835    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ae068289404"
	I0916 04:10:54.316929    4655 logs.go:123] Gathering logs for storage-provisioner [cd2986a75a6f] ...
	I0916 04:10:54.316940    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd2986a75a6f"
	I0916 04:10:54.329342    4655 logs.go:123] Gathering logs for dmesg ...
	I0916 04:10:54.329353    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:10:54.333675    4655 logs.go:123] Gathering logs for kube-apiserver [f96872a76692] ...
	I0916 04:10:54.333681    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f96872a76692"
	I0916 04:10:54.345497    4655 logs.go:123] Gathering logs for etcd [05bfeea67744] ...
	I0916 04:10:54.345507    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bfeea67744"
	I0916 04:10:54.359331    4655 logs.go:123] Gathering logs for kubelet ...
	I0916 04:10:54.359340    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:10:54.396095    4655 logs.go:123] Gathering logs for kube-apiserver [6e10ade08bbc] ...
	I0916 04:10:54.396104    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e10ade08bbc"
	I0916 04:10:54.410530    4655 logs.go:123] Gathering logs for kube-controller-manager [11b972b52433] ...
	I0916 04:10:54.410541    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b972b52433"
	I0916 04:10:54.421986    4655 logs.go:123] Gathering logs for Docker ...
	I0916 04:10:54.421998    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:10:54.445561    4655 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:10:54.445570    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:10:54.479365    4655 logs.go:123] Gathering logs for kube-scheduler [cc2fad651b22] ...
	I0916 04:10:54.479374    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc2fad651b22"
	I0916 04:10:54.491062    4655 logs.go:123] Gathering logs for storage-provisioner [4e73ffbfc80e] ...
	I0916 04:10:54.491074    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e73ffbfc80e"
	I0916 04:10:57.002520    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:11:02.004865    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:11:02.004928    4655 kubeadm.go:597] duration metric: took 4m4.27247075s to restartPrimaryControlPlane
	W0916 04:11:02.004992    4655 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0916 04:11:02.005015    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0916 04:11:03.013524    4655 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.008518667s)
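[Editor's note] At this point the restart budget is exhausted: restartPrimaryControlPlane spent 4m4.27s without ever seeing a healthy /healthz, so minikube falls back to wiping the cluster state with a forced kubeadm reset before re-initializing. A minimal sketch of that wait-or-reset control flow follows; the function names are illustrative, not minikube's, and the reset command string is copied from the log line above:

    package main

    import (
        "os/exec"
        "time"
    )

    // restartOrReset polls the given health probe until the budget is spent,
    // then falls back to a forced `kubeadm reset`, matching the sequence in
    // the log: repeated healthz timeouts, then "will reset cluster".
    func restartOrReset(probe func() bool, budget time.Duration) error {
        deadline := time.Now().Add(budget) // ~4m4s elapsed in the run above
        for time.Now().Before(deadline) {
            if probe() {
                return nil // control plane came back; no reset needed
            }
            time.Sleep(2 * time.Second)
        }
        // "! Unable to restart control-plane node(s), will reset cluster"
        return exec.Command("/bin/bash", "-c",
            `sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" `+
                `kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force`).Run()
    }

    func main() {
        neverHealthy := func() bool { return false } // stand-in for the /healthz probe
        _ = restartOrReset(neverHealthy, 4*time.Minute)
    }
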
	I0916 04:11:03.013601    4655 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 04:11:03.018580    4655 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 04:11:03.021491    4655 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 04:11:03.024421    4655 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 04:11:03.024427    4655 kubeadm.go:157] found existing configuration files:
	
	I0916 04:11:03.024462    4655 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50297 /etc/kubernetes/admin.conf
	I0916 04:11:03.027415    4655 kubeadm.go:163] "https://control-plane.minikube.internal:50297" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50297 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 04:11:03.027444    4655 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 04:11:03.029944    4655 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50297 /etc/kubernetes/kubelet.conf
	I0916 04:11:03.032303    4655 kubeadm.go:163] "https://control-plane.minikube.internal:50297" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50297 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 04:11:03.032328    4655 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 04:11:03.035067    4655 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50297 /etc/kubernetes/controller-manager.conf
	I0916 04:11:03.037577    4655 kubeadm.go:163] "https://control-plane.minikube.internal:50297" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50297 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 04:11:03.037624    4655 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 04:11:03.040690    4655 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50297 /etc/kubernetes/scheduler.conf
	I0916 04:11:03.043937    4655 kubeadm.go:163] "https://control-plane.minikube.internal:50297" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50297 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 04:11:03.044036    4655 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
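[Editor's note] The four grep/rm pairs above are the stale-config cleanup: each kubeconfig under /etc/kubernetes is checked for the expected control-plane endpoint and removed when the check fails (here none of the files exist, so every grep exits with status 2 and every file is removed anyway). A sketch of the same loop, assuming a simple substring test in place of grep and direct file access in place of sudo over SSH:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // cleanStaleConfigs removes any kubeconfig that does not reference the
    // expected endpoint, mirroring the grep/rm pairs in the log above.
    func cleanStaleConfigs(endpoint string) {
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            data, err := os.ReadFile(f)
            if err != nil || !strings.Contains(string(data), endpoint) {
                fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
                os.Remove(f) // equivalent to `sudo rm -f <file>`
            }
        }
    }

    func main() {
        cleanStaleConfigs("https://control-plane.minikube.internal:50297")
    }
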
	I0916 04:11:03.047630    4655 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0916 04:11:03.066573    4655 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0916 04:11:03.066679    4655 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 04:11:03.116708    4655 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 04:11:03.116769    4655 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 04:11:03.116816    4655 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0916 04:11:03.168358    4655 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 04:11:03.172486    4655 out.go:235]   - Generating certificates and keys ...
	I0916 04:11:03.172524    4655 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 04:11:03.172560    4655 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 04:11:03.172600    4655 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0916 04:11:03.172649    4655 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0916 04:11:03.172685    4655 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0916 04:11:03.172728    4655 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0916 04:11:03.172770    4655 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0916 04:11:03.172802    4655 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0916 04:11:03.172846    4655 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0916 04:11:03.172886    4655 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0916 04:11:03.172909    4655 kubeadm.go:310] [certs] Using the existing "sa" key
	I0916 04:11:03.172942    4655 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 04:11:03.307629    4655 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 04:11:03.408377    4655 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 04:11:03.518888    4655 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 04:11:03.767888    4655 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 04:11:03.799260    4655 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 04:11:03.799681    4655 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 04:11:03.799720    4655 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 04:11:03.869356    4655 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 04:11:03.872797    4655 out.go:235]   - Booting up control plane ...
	I0916 04:11:03.872849    4655 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 04:11:03.872886    4655 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 04:11:03.872920    4655 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 04:11:03.873337    4655 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 04:11:03.873416    4655 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0916 04:11:08.375116    4655 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.501741 seconds
	I0916 04:11:08.375193    4655 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 04:11:08.379792    4655 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 04:11:08.890669    4655 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 04:11:08.890785    4655 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-588000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 04:11:09.397277    4655 kubeadm.go:310] [bootstrap-token] Using token: boxq8t.ye2mb6w3uyb5n055
	I0916 04:11:09.400922    4655 out.go:235]   - Configuring RBAC rules ...
	I0916 04:11:09.400992    4655 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 04:11:09.401045    4655 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 04:11:09.404600    4655 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 04:11:09.405680    4655 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 04:11:09.406607    4655 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 04:11:09.407624    4655 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 04:11:09.410846    4655 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 04:11:09.584092    4655 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 04:11:09.802137    4655 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 04:11:09.802668    4655 kubeadm.go:310] 
	I0916 04:11:09.802705    4655 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 04:11:09.802715    4655 kubeadm.go:310] 
	I0916 04:11:09.802778    4655 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 04:11:09.802783    4655 kubeadm.go:310] 
	I0916 04:11:09.802799    4655 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 04:11:09.802837    4655 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 04:11:09.802870    4655 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 04:11:09.802875    4655 kubeadm.go:310] 
	I0916 04:11:09.802915    4655 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 04:11:09.802920    4655 kubeadm.go:310] 
	I0916 04:11:09.802948    4655 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 04:11:09.802952    4655 kubeadm.go:310] 
	I0916 04:11:09.802988    4655 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 04:11:09.803037    4655 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 04:11:09.803092    4655 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 04:11:09.803097    4655 kubeadm.go:310] 
	I0916 04:11:09.803145    4655 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 04:11:09.803210    4655 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 04:11:09.803215    4655 kubeadm.go:310] 
	I0916 04:11:09.803276    4655 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token boxq8t.ye2mb6w3uyb5n055 \
	I0916 04:11:09.803347    4655 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:dd10c29c679a7875936030b69b87136fbf1dbd0c06d390de7566972b8cb65116 \
	I0916 04:11:09.803362    4655 kubeadm.go:310] 	--control-plane 
	I0916 04:11:09.803368    4655 kubeadm.go:310] 
	I0916 04:11:09.803419    4655 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 04:11:09.803423    4655 kubeadm.go:310] 
	I0916 04:11:09.803467    4655 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token boxq8t.ye2mb6w3uyb5n055 \
	I0916 04:11:09.803530    4655 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:dd10c29c679a7875936030b69b87136fbf1dbd0c06d390de7566972b8cb65116 
	I0916 04:11:09.803613    4655 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0916 04:11:09.803622    4655 cni.go:84] Creating CNI manager for ""
	I0916 04:11:09.803632    4655 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 04:11:09.807395    4655 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0916 04:11:09.811299    4655 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0916 04:11:09.815828    4655 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
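	The 496-byte file pushed to /etc/cni/net.d/1-k8s.conflist above is a bridge CNI configuration. Its exact contents are not shown in the log; the sketch below writes an illustrative conflist of the same general shape (every field value here is an assumption for demonstration, not the literal file minikube generated).

	    package main

	    import "os"

	    // Illustrative bridge CNI config of the kind minikube writes to
	    // /etc/cni/net.d/1-k8s.conflist. Field values are assumptions, not
	    // the actual 496-byte payload from the log above.
	    const conflist = `{
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isDefaultGateway": true,
	          "ipMasq": true,
	          "hairpinMode": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }`

	    func main() {
	        // Writing this path needs root on the node, matching the
	        // `sudo mkdir -p /etc/cni/net.d` step in the log.
	        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
	            panic(err)
	        }
	    }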
	I0916 04:11:09.821425    4655 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 04:11:09.821540    4655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-588000 minikube.k8s.io/updated_at=2024_09_16T04_11_09_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=running-upgrade-588000 minikube.k8s.io/primary=true
	I0916 04:11:09.821543    4655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 04:11:09.864069    4655 kubeadm.go:1113] duration metric: took 42.590084ms to wait for elevateKubeSystemPrivileges
	I0916 04:11:09.864091    4655 ops.go:34] apiserver oom_adj: -16
	I0916 04:11:09.867820    4655 kubeadm.go:394] duration metric: took 4m12.157761792s to StartCluster
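	The "elevateKubeSystemPrivileges" step above grants cluster-admin to the kube-system:default service account, and the oom_adj read confirms the apiserver is protected from the OOM killer (-16). A minimal sketch of those two logged commands, assuming the same binary and kubeconfig paths shown in the log; this mirrors the commands, not minikube's own implementation:

	    package main

	    import (
	        "fmt"
	        "os/exec"
	    )

	    func main() {
	        kubectl := "/var/lib/minikube/binaries/v1.24.1/kubectl"
	        kubeconfig := "--kubeconfig=/var/lib/minikube/kubeconfig"

	        // Grant cluster-admin to kube-system:default, as logged above.
	        bind := exec.Command("sudo", kubectl, "create", "clusterrolebinding", "minikube-rbac",
	            "--clusterrole=cluster-admin", "--serviceaccount=kube-system:default", kubeconfig)
	        if out, err := bind.CombinedOutput(); err != nil {
	            fmt.Printf("clusterrolebinding: %v\n%s", err, out)
	        }

	        // Read the apiserver's OOM adjustment (-16 in the log above).
	        oom := exec.Command("/bin/bash", "-c", "cat /proc/$(pgrep kube-apiserver)/oom_adj")
	        if out, err := oom.Output(); err == nil {
	            fmt.Printf("apiserver oom_adj: %s", out)
	        }
	    }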
	I0916 04:11:09.867834    4655 settings.go:142] acquiring lock: {Name:mk9072b559308de66cf3dabb49aa5dd0b6d18e61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 04:11:09.867916    4655 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19651-1133/kubeconfig
	I0916 04:11:09.868347    4655 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19651-1133/kubeconfig: {Name:mk6c71bad5b7ea3c1178ed072ac13c9e3d1e147d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 04:11:09.868547    4655 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 04:11:09.868592    4655 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 04:11:09.868631    4655 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-588000"
	I0916 04:11:09.868671    4655 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-588000"
	W0916 04:11:09.868678    4655 addons.go:243] addon storage-provisioner should already be in state true
	I0916 04:11:09.868688    4655 host.go:66] Checking if "running-upgrade-588000" exists ...
	I0916 04:11:09.868634    4655 config.go:182] Loaded profile config "running-upgrade-588000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0916 04:11:09.868656    4655 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-588000"
	I0916 04:11:09.868733    4655 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-588000"
	I0916 04:11:09.869636    4655 kapi.go:59] client config for running-upgrade-588000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/running-upgrade-588000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/running-upgrade-588000/client.key", CAFile:"/Users/jenkins/minikube-integration/19651-1133/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103b55800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 04:11:09.869767    4655 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-588000"
	W0916 04:11:09.869772    4655 addons.go:243] addon default-storageclass should already be in state true
	I0916 04:11:09.869780    4655 host.go:66] Checking if "running-upgrade-588000" exists ...
	I0916 04:11:09.872318    4655 out.go:177] * Verifying Kubernetes components...
	I0916 04:11:09.872739    4655 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 04:11:09.873703    4655 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 04:11:09.873712    4655 sshutil.go:53] new ssh client: &{IP:localhost Port:50265 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/running-upgrade-588000/id_rsa Username:docker}
	I0916 04:11:09.877418    4655 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 04:11:09.881315    4655 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 04:11:09.885428    4655 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 04:11:09.885466    4655 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 04:11:09.885500    4655 sshutil.go:53] new ssh client: &{IP:localhost Port:50265 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/running-upgrade-588000/id_rsa Username:docker}
	I0916 04:11:09.956659    4655 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 04:11:09.961911    4655 api_server.go:52] waiting for apiserver process to appear ...
	I0916 04:11:09.961960    4655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 04:11:09.965753    4655 api_server.go:72] duration metric: took 97.197459ms to wait for apiserver process to appear ...
	I0916 04:11:09.965760    4655 api_server.go:88] waiting for apiserver healthz status ...
	I0916 04:11:09.965767    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
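	The healthz wait that begins here polls https://10.0.2.15:8443/healthz with a short per-request client timeout, which is why an unreachable apiserver surfaces below as "context deadline exceeded" roughly every five seconds rather than as a single hard failure. A minimal sketch of that polling pattern; the 5s timeout and the TLS handling are assumptions for illustration, not minikube's exact client configuration:

	    package main

	    import (
	        "crypto/tls"
	        "fmt"
	        "net/http"
	        "time"
	    )

	    func main() {
	        client := &http.Client{
	            Timeout: 5 * time.Second, // each probe gives up after ~5s
	            Transport: &http.Transport{
	                // The apiserver's cert is not verified in this sketch.
	                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	            },
	        }
	        for {
	            resp, err := client.Get("https://10.0.2.15:8443/healthz")
	            if err != nil {
	                // Matches the "stopped: ... context deadline exceeded" lines below.
	                fmt.Printf("stopped: %v\n", err)
	                time.Sleep(500 * time.Millisecond)
	                continue
	            }
	            resp.Body.Close()
	            if resp.StatusCode == http.StatusOK {
	                fmt.Println("apiserver healthy")
	                return
	            }
	        }
	    }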
	I0916 04:11:09.990082    4655 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 04:11:10.015639    4655 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 04:11:10.315917    4655 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0916 04:11:10.315930    4655 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0916 04:11:14.966367    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:11:14.966417    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:11:19.967647    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:11:19.967678    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:11:24.967871    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:11:24.967895    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:11:29.968125    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:11:29.968160    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:11:34.968543    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:11:34.968583    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:11:39.969137    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:11:39.969169    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0916 04:11:40.317719    4655 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0916 04:11:40.321929    4655 out.go:177] * Enabled addons: storage-provisioner
	I0916 04:11:40.329882    4655 addons.go:510] duration metric: took 30.461913291s for enable addons: enabled=[storage-provisioner]
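	Note the asymmetry above: storage-provisioner is applied by shelling out to kubectl on the node (which succeeds), while default-storageclass talks to the apiserver from the host via a REST client, so it fails with the i/o timeout once 10.0.2.15:8443 stops answering. A minimal client-go sketch of the StorageClasses list that callback performs; the kubeconfig path is an assumption for illustration:

	    package main

	    import (
	        "context"
	        "fmt"

	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    func main() {
	        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	        if err != nil {
	            panic(err)
	        }
	        cs, err := kubernetes.NewForConfig(cfg)
	        if err != nil {
	            panic(err)
	        }
	        scs, err := cs.StorageV1().StorageClasses().List(context.Background(), metav1.ListOptions{})
	        if err != nil {
	            // With the apiserver unreachable, this is the
	            // "Error listing StorageClasses ... i/o timeout" seen above.
	            fmt.Println("listing StorageClasses:", err)
	            return
	        }
	        for _, sc := range scs.Items {
	            fmt.Println(sc.Name)
	        }
	    }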
	I0916 04:11:44.969901    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:11:44.969963    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:11:49.971139    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:11:49.971218    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:11:54.972774    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:11:54.972852    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:11:59.974620    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:11:59.974643    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:12:04.976740    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:12:04.976776    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:12:09.978413    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:12:09.978508    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:12:10.006827    4655 logs.go:276] 1 containers: [bb77a45fbb50]
	I0916 04:12:10.006917    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:12:10.020818    4655 logs.go:276] 1 containers: [a44a59d44c6e]
	I0916 04:12:10.020896    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:12:10.032167    4655 logs.go:276] 2 containers: [8869c7622640 e3ad1db7ced5]
	I0916 04:12:10.032252    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:12:10.042487    4655 logs.go:276] 1 containers: [6497fc64f33e]
	I0916 04:12:10.042562    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:12:10.052459    4655 logs.go:276] 1 containers: [c0732c73e3bf]
	I0916 04:12:10.052551    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:12:10.062891    4655 logs.go:276] 1 containers: [6872f5cacb62]
	I0916 04:12:10.062970    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:12:10.073373    4655 logs.go:276] 0 containers: []
	W0916 04:12:10.073383    4655 logs.go:278] No container was found matching "kindnet"
	I0916 04:12:10.073444    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:12:10.083937    4655 logs.go:276] 1 containers: [1b9c4326b62d]
	I0916 04:12:10.083952    4655 logs.go:123] Gathering logs for storage-provisioner [1b9c4326b62d] ...
	I0916 04:12:10.083957    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b9c4326b62d"
	I0916 04:12:10.096607    4655 logs.go:123] Gathering logs for container status ...
	I0916 04:12:10.096620    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:12:10.107927    4655 logs.go:123] Gathering logs for dmesg ...
	I0916 04:12:10.107938    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:12:10.112639    4655 logs.go:123] Gathering logs for etcd [a44a59d44c6e] ...
	I0916 04:12:10.112646    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a44a59d44c6e"
	I0916 04:12:10.127263    4655 logs.go:123] Gathering logs for coredns [e3ad1db7ced5] ...
	I0916 04:12:10.127273    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3ad1db7ced5"
	I0916 04:12:10.140923    4655 logs.go:123] Gathering logs for kube-proxy [c0732c73e3bf] ...
	I0916 04:12:10.140938    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0732c73e3bf"
	I0916 04:12:10.152645    4655 logs.go:123] Gathering logs for kube-controller-manager [6872f5cacb62] ...
	I0916 04:12:10.152656    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6872f5cacb62"
	I0916 04:12:10.170741    4655 logs.go:123] Gathering logs for Docker ...
	I0916 04:12:10.170751    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:12:10.194922    4655 logs.go:123] Gathering logs for kubelet ...
	I0916 04:12:10.194929    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:12:10.233795    4655 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:12:10.233805    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:12:10.269293    4655 logs.go:123] Gathering logs for kube-apiserver [bb77a45fbb50] ...
	I0916 04:12:10.269304    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb77a45fbb50"
	I0916 04:12:10.283814    4655 logs.go:123] Gathering logs for coredns [8869c7622640] ...
	I0916 04:12:10.283823    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8869c7622640"
	I0916 04:12:10.296514    4655 logs.go:123] Gathering logs for kube-scheduler [6497fc64f33e] ...
	I0916 04:12:10.296529    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6497fc64f33e"
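	Each diagnostic cycle above follows the same pattern: one `docker ps -a` per control-plane component, filtered by the kubelet's `k8s_<name>` container-name prefix, then `docker logs --tail 400` for every hit. A minimal sketch of that discovery loop, assuming it runs on a node with the Docker CLI available; output parsing is simplified:

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	    )

	    func main() {
	        components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	            "kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
	        for _, c := range components {
	            // Kubelet-managed containers are named k8s_<component>_<pod>_...
	            out, err := exec.Command("docker", "ps", "-a",
	                "--filter=name=k8s_"+c, "--format={{.ID}}").Output()
	            if err != nil {
	                fmt.Printf("%s: %v\n", c, err)
	                continue
	            }
	            ids := strings.Fields(string(out))
	            fmt.Printf("%s: %d containers %v\n", c, len(ids), ids)
	            // For each ID, the log gatherer then runs:
	            //   docker logs --tail 400 <id>
	        }
	    }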
	I0916 04:12:12.813095    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:12:17.815541    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:12:17.815641    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:12:17.827252    4655 logs.go:276] 1 containers: [bb77a45fbb50]
	I0916 04:12:17.827341    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:12:17.839278    4655 logs.go:276] 1 containers: [a44a59d44c6e]
	I0916 04:12:17.839368    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:12:17.850709    4655 logs.go:276] 2 containers: [8869c7622640 e3ad1db7ced5]
	I0916 04:12:17.850792    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:12:17.861721    4655 logs.go:276] 1 containers: [6497fc64f33e]
	I0916 04:12:17.861807    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:12:17.873333    4655 logs.go:276] 1 containers: [c0732c73e3bf]
	I0916 04:12:17.873418    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:12:17.884821    4655 logs.go:276] 1 containers: [6872f5cacb62]
	I0916 04:12:17.884906    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:12:17.895952    4655 logs.go:276] 0 containers: []
	W0916 04:12:17.895962    4655 logs.go:278] No container was found matching "kindnet"
	I0916 04:12:17.896032    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:12:17.906522    4655 logs.go:276] 1 containers: [1b9c4326b62d]
	I0916 04:12:17.906541    4655 logs.go:123] Gathering logs for storage-provisioner [1b9c4326b62d] ...
	I0916 04:12:17.906546    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b9c4326b62d"
	I0916 04:12:17.918579    4655 logs.go:123] Gathering logs for dmesg ...
	I0916 04:12:17.918590    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:12:17.923164    4655 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:12:17.923174    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:12:17.958898    4655 logs.go:123] Gathering logs for etcd [a44a59d44c6e] ...
	I0916 04:12:17.958913    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a44a59d44c6e"
	I0916 04:12:17.972778    4655 logs.go:123] Gathering logs for kube-scheduler [6497fc64f33e] ...
	I0916 04:12:17.972790    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6497fc64f33e"
	I0916 04:12:17.994146    4655 logs.go:123] Gathering logs for kube-proxy [c0732c73e3bf] ...
	I0916 04:12:17.994156    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0732c73e3bf"
	I0916 04:12:18.006274    4655 logs.go:123] Gathering logs for kube-controller-manager [6872f5cacb62] ...
	I0916 04:12:18.006285    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6872f5cacb62"
	I0916 04:12:18.033663    4655 logs.go:123] Gathering logs for kubelet ...
	I0916 04:12:18.033675    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:12:18.072591    4655 logs.go:123] Gathering logs for kube-apiserver [bb77a45fbb50] ...
	I0916 04:12:18.072603    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb77a45fbb50"
	I0916 04:12:18.087355    4655 logs.go:123] Gathering logs for coredns [8869c7622640] ...
	I0916 04:12:18.087370    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8869c7622640"
	I0916 04:12:18.098856    4655 logs.go:123] Gathering logs for coredns [e3ad1db7ced5] ...
	I0916 04:12:18.098869    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3ad1db7ced5"
	I0916 04:12:18.110242    4655 logs.go:123] Gathering logs for Docker ...
	I0916 04:12:18.110254    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:12:18.134794    4655 logs.go:123] Gathering logs for container status ...
	I0916 04:12:18.134803    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:12:20.648364    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:12:25.648594    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:12:25.648686    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:12:25.660140    4655 logs.go:276] 1 containers: [bb77a45fbb50]
	I0916 04:12:25.660227    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:12:25.671667    4655 logs.go:276] 1 containers: [a44a59d44c6e]
	I0916 04:12:25.671751    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:12:25.682421    4655 logs.go:276] 2 containers: [8869c7622640 e3ad1db7ced5]
	I0916 04:12:25.682505    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:12:25.694095    4655 logs.go:276] 1 containers: [6497fc64f33e]
	I0916 04:12:25.694179    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:12:25.705415    4655 logs.go:276] 1 containers: [c0732c73e3bf]
	I0916 04:12:25.705502    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:12:25.717719    4655 logs.go:276] 1 containers: [6872f5cacb62]
	I0916 04:12:25.717799    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:12:25.729037    4655 logs.go:276] 0 containers: []
	W0916 04:12:25.729049    4655 logs.go:278] No container was found matching "kindnet"
	I0916 04:12:25.729129    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:12:25.740337    4655 logs.go:276] 1 containers: [1b9c4326b62d]
	I0916 04:12:25.740352    4655 logs.go:123] Gathering logs for storage-provisioner [1b9c4326b62d] ...
	I0916 04:12:25.740356    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b9c4326b62d"
	I0916 04:12:25.753804    4655 logs.go:123] Gathering logs for Docker ...
	I0916 04:12:25.753816    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:12:25.778412    4655 logs.go:123] Gathering logs for container status ...
	I0916 04:12:25.778423    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:12:25.791078    4655 logs.go:123] Gathering logs for kube-apiserver [bb77a45fbb50] ...
	I0916 04:12:25.791091    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb77a45fbb50"
	I0916 04:12:25.805894    4655 logs.go:123] Gathering logs for kube-scheduler [6497fc64f33e] ...
	I0916 04:12:25.805902    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6497fc64f33e"
	I0916 04:12:25.822203    4655 logs.go:123] Gathering logs for kube-proxy [c0732c73e3bf] ...
	I0916 04:12:25.822214    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0732c73e3bf"
	I0916 04:12:25.834441    4655 logs.go:123] Gathering logs for etcd [a44a59d44c6e] ...
	I0916 04:12:25.834456    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a44a59d44c6e"
	I0916 04:12:25.848476    4655 logs.go:123] Gathering logs for coredns [8869c7622640] ...
	I0916 04:12:25.848487    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8869c7622640"
	I0916 04:12:25.866456    4655 logs.go:123] Gathering logs for coredns [e3ad1db7ced5] ...
	I0916 04:12:25.866472    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3ad1db7ced5"
	I0916 04:12:25.877916    4655 logs.go:123] Gathering logs for kube-controller-manager [6872f5cacb62] ...
	I0916 04:12:25.877927    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6872f5cacb62"
	I0916 04:12:25.895886    4655 logs.go:123] Gathering logs for kubelet ...
	I0916 04:12:25.895897    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:12:25.934621    4655 logs.go:123] Gathering logs for dmesg ...
	I0916 04:12:25.934631    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:12:25.939094    4655 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:12:25.939100    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:12:28.475196    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:12:33.477347    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:12:33.477447    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:12:33.489384    4655 logs.go:276] 1 containers: [bb77a45fbb50]
	I0916 04:12:33.489473    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:12:33.501077    4655 logs.go:276] 1 containers: [a44a59d44c6e]
	I0916 04:12:33.501158    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:12:33.514676    4655 logs.go:276] 2 containers: [8869c7622640 e3ad1db7ced5]
	I0916 04:12:33.514760    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:12:33.526585    4655 logs.go:276] 1 containers: [6497fc64f33e]
	I0916 04:12:33.526673    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:12:33.538439    4655 logs.go:276] 1 containers: [c0732c73e3bf]
	I0916 04:12:33.538531    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:12:33.553565    4655 logs.go:276] 1 containers: [6872f5cacb62]
	I0916 04:12:33.553655    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:12:33.566891    4655 logs.go:276] 0 containers: []
	W0916 04:12:33.566903    4655 logs.go:278] No container was found matching "kindnet"
	I0916 04:12:33.566978    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:12:33.578356    4655 logs.go:276] 1 containers: [1b9c4326b62d]
	I0916 04:12:33.578375    4655 logs.go:123] Gathering logs for kube-controller-manager [6872f5cacb62] ...
	I0916 04:12:33.578381    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6872f5cacb62"
	I0916 04:12:33.596755    4655 logs.go:123] Gathering logs for storage-provisioner [1b9c4326b62d] ...
	I0916 04:12:33.596769    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b9c4326b62d"
	I0916 04:12:33.609697    4655 logs.go:123] Gathering logs for kubelet ...
	I0916 04:12:33.609709    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:12:33.649007    4655 logs.go:123] Gathering logs for dmesg ...
	I0916 04:12:33.649018    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:12:33.654150    4655 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:12:33.654156    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:12:33.711492    4655 logs.go:123] Gathering logs for coredns [8869c7622640] ...
	I0916 04:12:33.711505    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8869c7622640"
	I0916 04:12:33.727630    4655 logs.go:123] Gathering logs for kube-proxy [c0732c73e3bf] ...
	I0916 04:12:33.727638    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0732c73e3bf"
	I0916 04:12:33.740310    4655 logs.go:123] Gathering logs for Docker ...
	I0916 04:12:33.740321    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:12:33.765709    4655 logs.go:123] Gathering logs for container status ...
	I0916 04:12:33.765724    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:12:33.781589    4655 logs.go:123] Gathering logs for kube-apiserver [bb77a45fbb50] ...
	I0916 04:12:33.781600    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb77a45fbb50"
	I0916 04:12:33.796809    4655 logs.go:123] Gathering logs for etcd [a44a59d44c6e] ...
	I0916 04:12:33.796823    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a44a59d44c6e"
	I0916 04:12:33.814674    4655 logs.go:123] Gathering logs for coredns [e3ad1db7ced5] ...
	I0916 04:12:33.814687    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3ad1db7ced5"
	I0916 04:12:33.826860    4655 logs.go:123] Gathering logs for kube-scheduler [6497fc64f33e] ...
	I0916 04:12:33.826876    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6497fc64f33e"
	I0916 04:12:36.343546    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:12:41.345607    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:12:41.345708    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:12:41.359239    4655 logs.go:276] 1 containers: [bb77a45fbb50]
	I0916 04:12:41.359356    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:12:41.370695    4655 logs.go:276] 1 containers: [a44a59d44c6e]
	I0916 04:12:41.370780    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:12:41.391733    4655 logs.go:276] 2 containers: [8869c7622640 e3ad1db7ced5]
	I0916 04:12:41.391810    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:12:41.402897    4655 logs.go:276] 1 containers: [6497fc64f33e]
	I0916 04:12:41.402975    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:12:41.414726    4655 logs.go:276] 1 containers: [c0732c73e3bf]
	I0916 04:12:41.414806    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:12:41.429838    4655 logs.go:276] 1 containers: [6872f5cacb62]
	I0916 04:12:41.429921    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:12:41.442892    4655 logs.go:276] 0 containers: []
	W0916 04:12:41.442900    4655 logs.go:278] No container was found matching "kindnet"
	I0916 04:12:41.442969    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:12:41.455510    4655 logs.go:276] 1 containers: [1b9c4326b62d]
	I0916 04:12:41.455525    4655 logs.go:123] Gathering logs for kubelet ...
	I0916 04:12:41.455530    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:12:41.495562    4655 logs.go:123] Gathering logs for etcd [a44a59d44c6e] ...
	I0916 04:12:41.495573    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a44a59d44c6e"
	I0916 04:12:41.509659    4655 logs.go:123] Gathering logs for coredns [e3ad1db7ced5] ...
	I0916 04:12:41.509671    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3ad1db7ced5"
	I0916 04:12:41.522678    4655 logs.go:123] Gathering logs for kube-scheduler [6497fc64f33e] ...
	I0916 04:12:41.522687    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6497fc64f33e"
	I0916 04:12:41.538494    4655 logs.go:123] Gathering logs for kube-controller-manager [6872f5cacb62] ...
	I0916 04:12:41.538505    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6872f5cacb62"
	I0916 04:12:41.557264    4655 logs.go:123] Gathering logs for container status ...
	I0916 04:12:41.557276    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:12:41.570126    4655 logs.go:123] Gathering logs for dmesg ...
	I0916 04:12:41.570138    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:12:41.574873    4655 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:12:41.574883    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:12:41.613311    4655 logs.go:123] Gathering logs for kube-apiserver [bb77a45fbb50] ...
	I0916 04:12:41.613321    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb77a45fbb50"
	I0916 04:12:41.628959    4655 logs.go:123] Gathering logs for coredns [8869c7622640] ...
	I0916 04:12:41.628971    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8869c7622640"
	I0916 04:12:41.641716    4655 logs.go:123] Gathering logs for kube-proxy [c0732c73e3bf] ...
	I0916 04:12:41.641729    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0732c73e3bf"
	I0916 04:12:41.654373    4655 logs.go:123] Gathering logs for storage-provisioner [1b9c4326b62d] ...
	I0916 04:12:41.654386    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b9c4326b62d"
	I0916 04:12:41.666305    4655 logs.go:123] Gathering logs for Docker ...
	I0916 04:12:41.666321    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:12:44.193276    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:12:49.195578    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:12:49.195835    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:12:49.214246    4655 logs.go:276] 1 containers: [bb77a45fbb50]
	I0916 04:12:49.214350    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:12:49.227764    4655 logs.go:276] 1 containers: [a44a59d44c6e]
	I0916 04:12:49.227861    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:12:49.239829    4655 logs.go:276] 2 containers: [8869c7622640 e3ad1db7ced5]
	I0916 04:12:49.239919    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:12:49.250149    4655 logs.go:276] 1 containers: [6497fc64f33e]
	I0916 04:12:49.250240    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:12:49.260943    4655 logs.go:276] 1 containers: [c0732c73e3bf]
	I0916 04:12:49.261031    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:12:49.272352    4655 logs.go:276] 1 containers: [6872f5cacb62]
	I0916 04:12:49.272433    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:12:49.283620    4655 logs.go:276] 0 containers: []
	W0916 04:12:49.283642    4655 logs.go:278] No container was found matching "kindnet"
	I0916 04:12:49.283724    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:12:49.294682    4655 logs.go:276] 1 containers: [1b9c4326b62d]
	I0916 04:12:49.294698    4655 logs.go:123] Gathering logs for kube-controller-manager [6872f5cacb62] ...
	I0916 04:12:49.294704    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6872f5cacb62"
	I0916 04:12:49.320187    4655 logs.go:123] Gathering logs for Docker ...
	I0916 04:12:49.320200    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:12:49.347142    4655 logs.go:123] Gathering logs for container status ...
	I0916 04:12:49.347155    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:12:49.360053    4655 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:12:49.360069    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:12:49.398980    4655 logs.go:123] Gathering logs for coredns [8869c7622640] ...
	I0916 04:12:49.398992    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8869c7622640"
	I0916 04:12:49.412126    4655 logs.go:123] Gathering logs for coredns [e3ad1db7ced5] ...
	I0916 04:12:49.412138    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3ad1db7ced5"
	I0916 04:12:49.424659    4655 logs.go:123] Gathering logs for kube-proxy [c0732c73e3bf] ...
	I0916 04:12:49.424669    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0732c73e3bf"
	I0916 04:12:49.438229    4655 logs.go:123] Gathering logs for kube-scheduler [6497fc64f33e] ...
	I0916 04:12:49.438240    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6497fc64f33e"
	I0916 04:12:49.454379    4655 logs.go:123] Gathering logs for storage-provisioner [1b9c4326b62d] ...
	I0916 04:12:49.454396    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b9c4326b62d"
	I0916 04:12:49.466815    4655 logs.go:123] Gathering logs for kubelet ...
	I0916 04:12:49.466831    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:12:49.506441    4655 logs.go:123] Gathering logs for dmesg ...
	I0916 04:12:49.506453    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:12:49.511691    4655 logs.go:123] Gathering logs for kube-apiserver [bb77a45fbb50] ...
	I0916 04:12:49.511698    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb77a45fbb50"
	I0916 04:12:49.536990    4655 logs.go:123] Gathering logs for etcd [a44a59d44c6e] ...
	I0916 04:12:49.537001    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a44a59d44c6e"
	I0916 04:12:52.054294    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:12:57.056447    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:12:57.056654    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:12:57.073139    4655 logs.go:276] 1 containers: [bb77a45fbb50]
	I0916 04:12:57.073237    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:12:57.086275    4655 logs.go:276] 1 containers: [a44a59d44c6e]
	I0916 04:12:57.086367    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:12:57.097844    4655 logs.go:276] 2 containers: [8869c7622640 e3ad1db7ced5]
	I0916 04:12:57.097933    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:12:57.108420    4655 logs.go:276] 1 containers: [6497fc64f33e]
	I0916 04:12:57.108507    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:12:57.119436    4655 logs.go:276] 1 containers: [c0732c73e3bf]
	I0916 04:12:57.119517    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:12:57.130508    4655 logs.go:276] 1 containers: [6872f5cacb62]
	I0916 04:12:57.130590    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:12:57.141151    4655 logs.go:276] 0 containers: []
	W0916 04:12:57.141163    4655 logs.go:278] No container was found matching "kindnet"
	I0916 04:12:57.141232    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:12:57.154744    4655 logs.go:276] 1 containers: [1b9c4326b62d]
	I0916 04:12:57.154762    4655 logs.go:123] Gathering logs for kube-scheduler [6497fc64f33e] ...
	I0916 04:12:57.154768    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6497fc64f33e"
	I0916 04:12:57.169761    4655 logs.go:123] Gathering logs for kube-proxy [c0732c73e3bf] ...
	I0916 04:12:57.169771    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0732c73e3bf"
	I0916 04:12:57.188200    4655 logs.go:123] Gathering logs for kube-controller-manager [6872f5cacb62] ...
	I0916 04:12:57.188211    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6872f5cacb62"
	I0916 04:12:57.206874    4655 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:12:57.206886    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:12:57.257459    4655 logs.go:123] Gathering logs for etcd [a44a59d44c6e] ...
	I0916 04:12:57.257471    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a44a59d44c6e"
	I0916 04:12:57.272919    4655 logs.go:123] Gathering logs for coredns [8869c7622640] ...
	I0916 04:12:57.272932    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8869c7622640"
	I0916 04:12:57.285335    4655 logs.go:123] Gathering logs for coredns [e3ad1db7ced5] ...
	I0916 04:12:57.285346    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3ad1db7ced5"
	I0916 04:12:57.298545    4655 logs.go:123] Gathering logs for storage-provisioner [1b9c4326b62d] ...
	I0916 04:12:57.298558    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b9c4326b62d"
	I0916 04:12:57.311521    4655 logs.go:123] Gathering logs for Docker ...
	I0916 04:12:57.311536    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:12:57.338615    4655 logs.go:123] Gathering logs for container status ...
	I0916 04:12:57.338626    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:12:57.357030    4655 logs.go:123] Gathering logs for kubelet ...
	I0916 04:12:57.357041    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:12:57.399226    4655 logs.go:123] Gathering logs for dmesg ...
	I0916 04:12:57.399239    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:12:57.404636    4655 logs.go:123] Gathering logs for kube-apiserver [bb77a45fbb50] ...
	I0916 04:12:57.404646    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb77a45fbb50"
	I0916 04:12:59.922382    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:13:04.923381    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:13:04.923556    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:13:04.937858    4655 logs.go:276] 1 containers: [bb77a45fbb50]
	I0916 04:13:04.937951    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:13:04.949774    4655 logs.go:276] 1 containers: [a44a59d44c6e]
	I0916 04:13:04.949862    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:13:04.961689    4655 logs.go:276] 2 containers: [8869c7622640 e3ad1db7ced5]
	I0916 04:13:04.961780    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:13:04.972456    4655 logs.go:276] 1 containers: [6497fc64f33e]
	I0916 04:13:04.972534    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:13:04.984644    4655 logs.go:276] 1 containers: [c0732c73e3bf]
	I0916 04:13:04.984723    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:13:04.995129    4655 logs.go:276] 1 containers: [6872f5cacb62]
	I0916 04:13:04.995217    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:13:05.006078    4655 logs.go:276] 0 containers: []
	W0916 04:13:05.006091    4655 logs.go:278] No container was found matching "kindnet"
	I0916 04:13:05.006168    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:13:05.017124    4655 logs.go:276] 1 containers: [1b9c4326b62d]
	I0916 04:13:05.017141    4655 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:13:05.017148    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:13:05.054785    4655 logs.go:123] Gathering logs for kube-apiserver [bb77a45fbb50] ...
	I0916 04:13:05.054800    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb77a45fbb50"
	I0916 04:13:05.069559    4655 logs.go:123] Gathering logs for coredns [8869c7622640] ...
	I0916 04:13:05.069569    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8869c7622640"
	I0916 04:13:05.082009    4655 logs.go:123] Gathering logs for kube-scheduler [6497fc64f33e] ...
	I0916 04:13:05.082021    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6497fc64f33e"
	I0916 04:13:05.096731    4655 logs.go:123] Gathering logs for kube-proxy [c0732c73e3bf] ...
	I0916 04:13:05.096741    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0732c73e3bf"
	I0916 04:13:05.108405    4655 logs.go:123] Gathering logs for Docker ...
	I0916 04:13:05.108416    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:13:05.134775    4655 logs.go:123] Gathering logs for container status ...
	I0916 04:13:05.134787    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:13:05.147800    4655 logs.go:123] Gathering logs for kubelet ...
	I0916 04:13:05.147833    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:13:05.188492    4655 logs.go:123] Gathering logs for etcd [a44a59d44c6e] ...
	I0916 04:13:05.188503    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a44a59d44c6e"
	I0916 04:13:05.204248    4655 logs.go:123] Gathering logs for coredns [e3ad1db7ced5] ...
	I0916 04:13:05.204261    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3ad1db7ced5"
	I0916 04:13:05.217091    4655 logs.go:123] Gathering logs for kube-controller-manager [6872f5cacb62] ...
	I0916 04:13:05.217104    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6872f5cacb62"
	I0916 04:13:05.236004    4655 logs.go:123] Gathering logs for storage-provisioner [1b9c4326b62d] ...
	I0916 04:13:05.236018    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b9c4326b62d"
	I0916 04:13:05.250062    4655 logs.go:123] Gathering logs for dmesg ...
	I0916 04:13:05.250074    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:13:07.756955    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:13:12.759172    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:13:12.759442    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:13:12.775584    4655 logs.go:276] 1 containers: [bb77a45fbb50]
	I0916 04:13:12.775685    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:13:12.788381    4655 logs.go:276] 1 containers: [a44a59d44c6e]
	I0916 04:13:12.788455    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:13:12.798980    4655 logs.go:276] 2 containers: [8869c7622640 e3ad1db7ced5]
	I0916 04:13:12.799067    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:13:12.809304    4655 logs.go:276] 1 containers: [6497fc64f33e]
	I0916 04:13:12.809374    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:13:12.819931    4655 logs.go:276] 1 containers: [c0732c73e3bf]
	I0916 04:13:12.820019    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:13:12.830452    4655 logs.go:276] 1 containers: [6872f5cacb62]
	I0916 04:13:12.830526    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:13:12.840506    4655 logs.go:276] 0 containers: []
	W0916 04:13:12.840518    4655 logs.go:278] No container was found matching "kindnet"
	I0916 04:13:12.840591    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:13:12.851261    4655 logs.go:276] 1 containers: [1b9c4326b62d]
	I0916 04:13:12.851276    4655 logs.go:123] Gathering logs for kube-scheduler [6497fc64f33e] ...
	I0916 04:13:12.851281    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6497fc64f33e"
	I0916 04:13:12.866185    4655 logs.go:123] Gathering logs for kube-proxy [c0732c73e3bf] ...
	I0916 04:13:12.866195    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0732c73e3bf"
	I0916 04:13:12.878097    4655 logs.go:123] Gathering logs for storage-provisioner [1b9c4326b62d] ...
	I0916 04:13:12.878111    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b9c4326b62d"
	I0916 04:13:12.890387    4655 logs.go:123] Gathering logs for Docker ...
	I0916 04:13:12.890396    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:13:12.916080    4655 logs.go:123] Gathering logs for dmesg ...
	I0916 04:13:12.916093    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:13:12.921022    4655 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:13:12.921028    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:13:12.974558    4655 logs.go:123] Gathering logs for etcd [a44a59d44c6e] ...
	I0916 04:13:12.974570    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a44a59d44c6e"
	I0916 04:13:12.988710    4655 logs.go:123] Gathering logs for coredns [e3ad1db7ced5] ...
	I0916 04:13:12.988724    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3ad1db7ced5"
	I0916 04:13:13.001323    4655 logs.go:123] Gathering logs for kube-controller-manager [6872f5cacb62] ...
	I0916 04:13:13.001335    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6872f5cacb62"
	I0916 04:13:13.019418    4655 logs.go:123] Gathering logs for container status ...
	I0916 04:13:13.019434    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:13:13.031928    4655 logs.go:123] Gathering logs for kubelet ...
	I0916 04:13:13.031935    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:13:13.072670    4655 logs.go:123] Gathering logs for kube-apiserver [bb77a45fbb50] ...
	I0916 04:13:13.072688    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb77a45fbb50"
	I0916 04:13:13.090071    4655 logs.go:123] Gathering logs for coredns [8869c7622640] ...
	I0916 04:13:13.090089    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8869c7622640"
	I0916 04:13:15.604938    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:13:20.607192    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:13:20.607472    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:13:20.626010    4655 logs.go:276] 1 containers: [bb77a45fbb50]
	I0916 04:13:20.626125    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:13:20.640506    4655 logs.go:276] 1 containers: [a44a59d44c6e]
	I0916 04:13:20.640585    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:13:20.652687    4655 logs.go:276] 2 containers: [8869c7622640 e3ad1db7ced5]
	I0916 04:13:20.652776    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:13:20.663565    4655 logs.go:276] 1 containers: [6497fc64f33e]
	I0916 04:13:20.663640    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:13:20.674195    4655 logs.go:276] 1 containers: [c0732c73e3bf]
	I0916 04:13:20.674286    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:13:20.684387    4655 logs.go:276] 1 containers: [6872f5cacb62]
	I0916 04:13:20.684470    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:13:20.694199    4655 logs.go:276] 0 containers: []
	W0916 04:13:20.694209    4655 logs.go:278] No container was found matching "kindnet"
	I0916 04:13:20.694271    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:13:20.704511    4655 logs.go:276] 1 containers: [1b9c4326b62d]
	I0916 04:13:20.704527    4655 logs.go:123] Gathering logs for kube-apiserver [bb77a45fbb50] ...
	I0916 04:13:20.704532    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb77a45fbb50"
	I0916 04:13:20.718952    4655 logs.go:123] Gathering logs for etcd [a44a59d44c6e] ...
	I0916 04:13:20.718962    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a44a59d44c6e"
	I0916 04:13:20.740869    4655 logs.go:123] Gathering logs for coredns [e3ad1db7ced5] ...
	I0916 04:13:20.740880    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3ad1db7ced5"
	I0916 04:13:20.753450    4655 logs.go:123] Gathering logs for kube-scheduler [6497fc64f33e] ...
	I0916 04:13:20.753460    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6497fc64f33e"
	I0916 04:13:20.768592    4655 logs.go:123] Gathering logs for kube-controller-manager [6872f5cacb62] ...
	I0916 04:13:20.768603    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6872f5cacb62"
	I0916 04:13:20.786138    4655 logs.go:123] Gathering logs for storage-provisioner [1b9c4326b62d] ...
	I0916 04:13:20.786147    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b9c4326b62d"
	I0916 04:13:20.797991    4655 logs.go:123] Gathering logs for dmesg ...
	I0916 04:13:20.798001    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:13:20.802534    4655 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:13:20.802540    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:13:20.838253    4655 logs.go:123] Gathering logs for Docker ...
	I0916 04:13:20.838264    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:13:20.861463    4655 logs.go:123] Gathering logs for kube-proxy [c0732c73e3bf] ...
	I0916 04:13:20.861470    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0732c73e3bf"
	I0916 04:13:20.881122    4655 logs.go:123] Gathering logs for container status ...
	I0916 04:13:20.881133    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:13:20.892550    4655 logs.go:123] Gathering logs for kubelet ...
	I0916 04:13:20.892560    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:13:20.929712    4655 logs.go:123] Gathering logs for coredns [8869c7622640] ...
	I0916 04:13:20.929720    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8869c7622640"
	I0916 04:13:23.443502    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:13:28.445683    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:13:28.446052    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:13:28.471905    4655 logs.go:276] 1 containers: [bb77a45fbb50]
	I0916 04:13:28.472034    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:13:28.489358    4655 logs.go:276] 1 containers: [a44a59d44c6e]
	I0916 04:13:28.489473    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:13:28.503038    4655 logs.go:276] 4 containers: [5798ae515cc4 c83fcdb1c777 8869c7622640 e3ad1db7ced5]
	I0916 04:13:28.503130    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:13:28.515184    4655 logs.go:276] 1 containers: [6497fc64f33e]
	I0916 04:13:28.515270    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:13:28.525330    4655 logs.go:276] 1 containers: [c0732c73e3bf]
	I0916 04:13:28.525407    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:13:28.536365    4655 logs.go:276] 1 containers: [6872f5cacb62]
	I0916 04:13:28.536443    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:13:28.546215    4655 logs.go:276] 0 containers: []
	W0916 04:13:28.546226    4655 logs.go:278] No container was found matching "kindnet"
	I0916 04:13:28.546295    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:13:28.558405    4655 logs.go:276] 1 containers: [1b9c4326b62d]
	I0916 04:13:28.558425    4655 logs.go:123] Gathering logs for dmesg ...
	I0916 04:13:28.558431    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:13:28.563281    4655 logs.go:123] Gathering logs for kubelet ...
	I0916 04:13:28.563287    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:13:28.598602    4655 logs.go:123] Gathering logs for coredns [5798ae515cc4] ...
	I0916 04:13:28.598609    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5798ae515cc4"
	I0916 04:13:28.609593    4655 logs.go:123] Gathering logs for kube-proxy [c0732c73e3bf] ...
	I0916 04:13:28.609603    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0732c73e3bf"
	I0916 04:13:28.625570    4655 logs.go:123] Gathering logs for Docker ...
	I0916 04:13:28.625580    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:13:28.649150    4655 logs.go:123] Gathering logs for kube-apiserver [bb77a45fbb50] ...
	I0916 04:13:28.649160    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb77a45fbb50"
	I0916 04:13:28.663032    4655 logs.go:123] Gathering logs for coredns [e3ad1db7ced5] ...
	I0916 04:13:28.663042    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3ad1db7ced5"
	I0916 04:13:28.674468    4655 logs.go:123] Gathering logs for kube-controller-manager [6872f5cacb62] ...
	I0916 04:13:28.674478    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6872f5cacb62"
	I0916 04:13:28.692018    4655 logs.go:123] Gathering logs for container status ...
	I0916 04:13:28.692028    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:13:28.704238    4655 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:13:28.704249    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:13:28.739292    4655 logs.go:123] Gathering logs for etcd [a44a59d44c6e] ...
	I0916 04:13:28.739303    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a44a59d44c6e"
	I0916 04:13:28.768978    4655 logs.go:123] Gathering logs for coredns [c83fcdb1c777] ...
	I0916 04:13:28.768992    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c83fcdb1c777"
	I0916 04:13:28.780293    4655 logs.go:123] Gathering logs for coredns [8869c7622640] ...
	I0916 04:13:28.780304    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8869c7622640"
	I0916 04:13:28.791878    4655 logs.go:123] Gathering logs for kube-scheduler [6497fc64f33e] ...
	I0916 04:13:28.791887    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6497fc64f33e"
	I0916 04:13:28.814112    4655 logs.go:123] Gathering logs for storage-provisioner [1b9c4326b62d] ...
	I0916 04:13:28.814122    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b9c4326b62d"
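Each "Gathering logs for <component>" step in the cycle is the pair of commands recorded above: enumerate the containers matching a name filter, then read a bounded tail of each one's log. One such pair can be reproduced by hand inside the guest with the same flags the log shows; <container-id> here stands for whatever ID the first command prints:

	docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	docker logs --tail 400 <container-id>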
	I0916 04:13:31.327593    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:13:36.329597    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:13:36.329766    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:13:36.341171    4655 logs.go:276] 1 containers: [bb77a45fbb50]
	I0916 04:13:36.341252    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:13:36.351963    4655 logs.go:276] 1 containers: [a44a59d44c6e]
	I0916 04:13:36.352053    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:13:36.366594    4655 logs.go:276] 4 containers: [5798ae515cc4 c83fcdb1c777 8869c7622640 e3ad1db7ced5]
	I0916 04:13:36.366679    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:13:36.376838    4655 logs.go:276] 1 containers: [6497fc64f33e]
	I0916 04:13:36.376932    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:13:36.387622    4655 logs.go:276] 1 containers: [c0732c73e3bf]
	I0916 04:13:36.387695    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:13:36.402496    4655 logs.go:276] 1 containers: [6872f5cacb62]
	I0916 04:13:36.402568    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:13:36.412821    4655 logs.go:276] 0 containers: []
	W0916 04:13:36.412832    4655 logs.go:278] No container was found matching "kindnet"
	I0916 04:13:36.412903    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:13:36.423404    4655 logs.go:276] 1 containers: [1b9c4326b62d]
	I0916 04:13:36.423420    4655 logs.go:123] Gathering logs for kube-scheduler [6497fc64f33e] ...
	I0916 04:13:36.423425    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6497fc64f33e"
	I0916 04:13:36.440360    4655 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:13:36.440369    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:13:36.477520    4655 logs.go:123] Gathering logs for coredns [5798ae515cc4] ...
	I0916 04:13:36.477532    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5798ae515cc4"
	I0916 04:13:36.489131    4655 logs.go:123] Gathering logs for coredns [e3ad1db7ced5] ...
	I0916 04:13:36.489141    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3ad1db7ced5"
	I0916 04:13:36.508926    4655 logs.go:123] Gathering logs for dmesg ...
	I0916 04:13:36.508941    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:13:36.514922    4655 logs.go:123] Gathering logs for coredns [8869c7622640] ...
	I0916 04:13:36.514930    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8869c7622640"
	I0916 04:13:36.530679    4655 logs.go:123] Gathering logs for kube-proxy [c0732c73e3bf] ...
	I0916 04:13:36.530690    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0732c73e3bf"
	I0916 04:13:36.541896    4655 logs.go:123] Gathering logs for kubelet ...
	I0916 04:13:36.541906    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:13:36.577910    4655 logs.go:123] Gathering logs for kube-apiserver [bb77a45fbb50] ...
	I0916 04:13:36.577918    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb77a45fbb50"
	I0916 04:13:36.592220    4655 logs.go:123] Gathering logs for storage-provisioner [1b9c4326b62d] ...
	I0916 04:13:36.592231    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b9c4326b62d"
	I0916 04:13:36.603696    4655 logs.go:123] Gathering logs for Docker ...
	I0916 04:13:36.603706    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:13:36.628189    4655 logs.go:123] Gathering logs for container status ...
	I0916 04:13:36.628201    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:13:36.639953    4655 logs.go:123] Gathering logs for etcd [a44a59d44c6e] ...
	I0916 04:13:36.639965    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a44a59d44c6e"
	I0916 04:13:36.655441    4655 logs.go:123] Gathering logs for coredns [c83fcdb1c777] ...
	I0916 04:13:36.655451    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c83fcdb1c777"
	I0916 04:13:36.667249    4655 logs.go:123] Gathering logs for kube-controller-manager [6872f5cacb62] ...
	I0916 04:13:36.667260    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6872f5cacb62"
	I0916 04:13:39.194035    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:13:44.196319    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:13:44.196453    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:13:44.209114    4655 logs.go:276] 1 containers: [bb77a45fbb50]
	I0916 04:13:44.209202    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:13:44.220307    4655 logs.go:276] 1 containers: [a44a59d44c6e]
	I0916 04:13:44.220396    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:13:44.231691    4655 logs.go:276] 4 containers: [5798ae515cc4 c83fcdb1c777 8869c7622640 e3ad1db7ced5]
	I0916 04:13:44.231778    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:13:44.242953    4655 logs.go:276] 1 containers: [6497fc64f33e]
	I0916 04:13:44.243033    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:13:44.254077    4655 logs.go:276] 1 containers: [c0732c73e3bf]
	I0916 04:13:44.254162    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:13:44.264496    4655 logs.go:276] 1 containers: [6872f5cacb62]
	I0916 04:13:44.264573    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:13:44.275177    4655 logs.go:276] 0 containers: []
	W0916 04:13:44.275191    4655 logs.go:278] No container was found matching "kindnet"
	I0916 04:13:44.275265    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:13:44.286171    4655 logs.go:276] 1 containers: [1b9c4326b62d]
	I0916 04:13:44.286188    4655 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:13:44.286193    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:13:44.320148    4655 logs.go:123] Gathering logs for etcd [a44a59d44c6e] ...
	I0916 04:13:44.320159    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a44a59d44c6e"
	I0916 04:13:44.334181    4655 logs.go:123] Gathering logs for coredns [e3ad1db7ced5] ...
	I0916 04:13:44.334191    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3ad1db7ced5"
	I0916 04:13:44.348277    4655 logs.go:123] Gathering logs for dmesg ...
	I0916 04:13:44.348286    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:13:44.353207    4655 logs.go:123] Gathering logs for kube-apiserver [bb77a45fbb50] ...
	I0916 04:13:44.353212    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb77a45fbb50"
	I0916 04:13:44.366878    4655 logs.go:123] Gathering logs for kube-controller-manager [6872f5cacb62] ...
	I0916 04:13:44.366887    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6872f5cacb62"
	I0916 04:13:44.384497    4655 logs.go:123] Gathering logs for storage-provisioner [1b9c4326b62d] ...
	I0916 04:13:44.384507    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b9c4326b62d"
	I0916 04:13:44.396083    4655 logs.go:123] Gathering logs for container status ...
	I0916 04:13:44.396094    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:13:44.408101    4655 logs.go:123] Gathering logs for Docker ...
	I0916 04:13:44.408117    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:13:44.431404    4655 logs.go:123] Gathering logs for kube-proxy [c0732c73e3bf] ...
	I0916 04:13:44.431413    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0732c73e3bf"
	I0916 04:13:44.443692    4655 logs.go:123] Gathering logs for kubelet ...
	I0916 04:13:44.443706    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:13:44.481890    4655 logs.go:123] Gathering logs for coredns [5798ae515cc4] ...
	I0916 04:13:44.481902    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5798ae515cc4"
	I0916 04:13:44.494259    4655 logs.go:123] Gathering logs for coredns [c83fcdb1c777] ...
	I0916 04:13:44.494272    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c83fcdb1c777"
	I0916 04:13:44.506062    4655 logs.go:123] Gathering logs for coredns [8869c7622640] ...
	I0916 04:13:44.506073    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8869c7622640"
	I0916 04:13:44.517987    4655 logs.go:123] Gathering logs for kube-scheduler [6497fc64f33e] ...
	I0916 04:13:44.517997    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6497fc64f33e"
	I0916 04:13:47.034593    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:13:52.036845    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:13:52.037082    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:13:52.055242    4655 logs.go:276] 1 containers: [bb77a45fbb50]
	I0916 04:13:52.055361    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:13:52.069098    4655 logs.go:276] 1 containers: [a44a59d44c6e]
	I0916 04:13:52.069189    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:13:52.080338    4655 logs.go:276] 4 containers: [5798ae515cc4 c83fcdb1c777 8869c7622640 e3ad1db7ced5]
	I0916 04:13:52.080423    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:13:52.090817    4655 logs.go:276] 1 containers: [6497fc64f33e]
	I0916 04:13:52.090898    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:13:52.101533    4655 logs.go:276] 1 containers: [c0732c73e3bf]
	I0916 04:13:52.101613    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:13:52.114376    4655 logs.go:276] 1 containers: [6872f5cacb62]
	I0916 04:13:52.114454    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:13:52.124086    4655 logs.go:276] 0 containers: []
	W0916 04:13:52.124098    4655 logs.go:278] No container was found matching "kindnet"
	I0916 04:13:52.124170    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:13:52.134887    4655 logs.go:276] 1 containers: [1b9c4326b62d]
	I0916 04:13:52.134905    4655 logs.go:123] Gathering logs for kube-apiserver [bb77a45fbb50] ...
	I0916 04:13:52.134911    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb77a45fbb50"
	I0916 04:13:52.149915    4655 logs.go:123] Gathering logs for coredns [c83fcdb1c777] ...
	I0916 04:13:52.149924    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c83fcdb1c777"
	I0916 04:13:52.165530    4655 logs.go:123] Gathering logs for kube-controller-manager [6872f5cacb62] ...
	I0916 04:13:52.165543    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6872f5cacb62"
	I0916 04:13:52.183216    4655 logs.go:123] Gathering logs for kube-proxy [c0732c73e3bf] ...
	I0916 04:13:52.183225    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0732c73e3bf"
	I0916 04:13:52.195275    4655 logs.go:123] Gathering logs for storage-provisioner [1b9c4326b62d] ...
	I0916 04:13:52.195286    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b9c4326b62d"
	I0916 04:13:52.207169    4655 logs.go:123] Gathering logs for Docker ...
	I0916 04:13:52.207181    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:13:52.232796    4655 logs.go:123] Gathering logs for dmesg ...
	I0916 04:13:52.232809    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:13:52.237529    4655 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:13:52.237540    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:13:52.273296    4655 logs.go:123] Gathering logs for coredns [8869c7622640] ...
	I0916 04:13:52.273308    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8869c7622640"
	I0916 04:13:52.285654    4655 logs.go:123] Gathering logs for kube-scheduler [6497fc64f33e] ...
	I0916 04:13:52.285664    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6497fc64f33e"
	I0916 04:13:52.300331    4655 logs.go:123] Gathering logs for kubelet ...
	I0916 04:13:52.300341    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:13:52.339452    4655 logs.go:123] Gathering logs for coredns [5798ae515cc4] ...
	I0916 04:13:52.339462    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5798ae515cc4"
	I0916 04:13:52.351533    4655 logs.go:123] Gathering logs for coredns [e3ad1db7ced5] ...
	I0916 04:13:52.351543    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3ad1db7ced5"
	I0916 04:13:52.363087    4655 logs.go:123] Gathering logs for etcd [a44a59d44c6e] ...
	I0916 04:13:52.363099    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a44a59d44c6e"
	I0916 04:13:52.378079    4655 logs.go:123] Gathering logs for container status ...
	I0916 04:13:52.378089    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:13:54.892341    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:13:59.894949    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:13:59.895228    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:13:59.919076    4655 logs.go:276] 1 containers: [bb77a45fbb50]
	I0916 04:13:59.919192    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:13:59.938007    4655 logs.go:276] 1 containers: [a44a59d44c6e]
	I0916 04:13:59.938093    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:13:59.950763    4655 logs.go:276] 4 containers: [5798ae515cc4 c83fcdb1c777 8869c7622640 e3ad1db7ced5]
	I0916 04:13:59.950852    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:13:59.961721    4655 logs.go:276] 1 containers: [6497fc64f33e]
	I0916 04:13:59.961794    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:13:59.971954    4655 logs.go:276] 1 containers: [c0732c73e3bf]
	I0916 04:13:59.972036    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:13:59.982594    4655 logs.go:276] 1 containers: [6872f5cacb62]
	I0916 04:13:59.982682    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:13:59.994133    4655 logs.go:276] 0 containers: []
	W0916 04:13:59.994145    4655 logs.go:278] No container was found matching "kindnet"
	I0916 04:13:59.994211    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:14:00.004672    4655 logs.go:276] 1 containers: [1b9c4326b62d]
	I0916 04:14:00.004688    4655 logs.go:123] Gathering logs for kube-apiserver [bb77a45fbb50] ...
	I0916 04:14:00.004695    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb77a45fbb50"
	I0916 04:14:00.019254    4655 logs.go:123] Gathering logs for Docker ...
	I0916 04:14:00.019265    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:14:00.044654    4655 logs.go:123] Gathering logs for coredns [5798ae515cc4] ...
	I0916 04:14:00.044665    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5798ae515cc4"
	I0916 04:14:00.056573    4655 logs.go:123] Gathering logs for coredns [8869c7622640] ...
	I0916 04:14:00.056583    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8869c7622640"
	I0916 04:14:00.068758    4655 logs.go:123] Gathering logs for coredns [e3ad1db7ced5] ...
	I0916 04:14:00.068770    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3ad1db7ced5"
	I0916 04:14:00.079906    4655 logs.go:123] Gathering logs for kube-scheduler [6497fc64f33e] ...
	I0916 04:14:00.079918    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6497fc64f33e"
	I0916 04:14:00.095298    4655 logs.go:123] Gathering logs for kube-proxy [c0732c73e3bf] ...
	I0916 04:14:00.095308    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0732c73e3bf"
	I0916 04:14:00.107402    4655 logs.go:123] Gathering logs for kubelet ...
	I0916 04:14:00.107413    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:14:00.148946    4655 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:14:00.148954    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:14:00.184027    4655 logs.go:123] Gathering logs for coredns [c83fcdb1c777] ...
	I0916 04:14:00.184040    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c83fcdb1c777"
	I0916 04:14:00.195897    4655 logs.go:123] Gathering logs for dmesg ...
	I0916 04:14:00.195907    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:14:00.200672    4655 logs.go:123] Gathering logs for etcd [a44a59d44c6e] ...
	I0916 04:14:00.200678    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a44a59d44c6e"
	I0916 04:14:00.214982    4655 logs.go:123] Gathering logs for kube-controller-manager [6872f5cacb62] ...
	I0916 04:14:00.214996    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6872f5cacb62"
	I0916 04:14:00.232899    4655 logs.go:123] Gathering logs for storage-provisioner [1b9c4326b62d] ...
	I0916 04:14:00.232910    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b9c4326b62d"
	I0916 04:14:00.244898    4655 logs.go:123] Gathering logs for container status ...
	I0916 04:14:00.244910    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
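The "container status" command above relies on a small shell fallback: the backquoted `which crictl || echo crictl` expands to the crictl path when the binary is installed, and to the bare word crictl otherwise, so on a Docker-only guest the first invocation fails and the trailing || hands off to docker ps -a. A roughly equivalent long form of the same logic (a sketch, not the command minikube runs):

	sudo crictl ps -a 2>/dev/null || sudo docker ps -a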
	I0916 04:14:02.758694    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:14:07.760581    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:14:07.760791    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:14:07.787468    4655 logs.go:276] 1 containers: [bb77a45fbb50]
	I0916 04:14:07.787566    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:14:07.806680    4655 logs.go:276] 1 containers: [a44a59d44c6e]
	I0916 04:14:07.806765    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:14:07.816993    4655 logs.go:276] 4 containers: [5798ae515cc4 c83fcdb1c777 8869c7622640 e3ad1db7ced5]
	I0916 04:14:07.817075    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:14:07.827950    4655 logs.go:276] 1 containers: [6497fc64f33e]
	I0916 04:14:07.828032    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:14:07.839406    4655 logs.go:276] 1 containers: [c0732c73e3bf]
	I0916 04:14:07.839490    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:14:07.850397    4655 logs.go:276] 1 containers: [6872f5cacb62]
	I0916 04:14:07.850479    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:14:07.860578    4655 logs.go:276] 0 containers: []
	W0916 04:14:07.860592    4655 logs.go:278] No container was found matching "kindnet"
	I0916 04:14:07.860674    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:14:07.871404    4655 logs.go:276] 1 containers: [1b9c4326b62d]
	I0916 04:14:07.871422    4655 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:14:07.871428    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:14:07.907321    4655 logs.go:123] Gathering logs for etcd [a44a59d44c6e] ...
	I0916 04:14:07.907334    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a44a59d44c6e"
	I0916 04:14:07.921344    4655 logs.go:123] Gathering logs for coredns [e3ad1db7ced5] ...
	I0916 04:14:07.921354    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3ad1db7ced5"
	I0916 04:14:07.938146    4655 logs.go:123] Gathering logs for kube-proxy [c0732c73e3bf] ...
	I0916 04:14:07.938157    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0732c73e3bf"
	I0916 04:14:07.949709    4655 logs.go:123] Gathering logs for kube-apiserver [bb77a45fbb50] ...
	I0916 04:14:07.949718    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb77a45fbb50"
	I0916 04:14:07.966402    4655 logs.go:123] Gathering logs for coredns [c83fcdb1c777] ...
	I0916 04:14:07.966415    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c83fcdb1c777"
	I0916 04:14:07.977700    4655 logs.go:123] Gathering logs for kube-scheduler [6497fc64f33e] ...
	I0916 04:14:07.977713    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6497fc64f33e"
	I0916 04:14:07.992668    4655 logs.go:123] Gathering logs for kube-controller-manager [6872f5cacb62] ...
	I0916 04:14:07.992680    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6872f5cacb62"
	I0916 04:14:08.010170    4655 logs.go:123] Gathering logs for Docker ...
	I0916 04:14:08.010179    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:14:08.035219    4655 logs.go:123] Gathering logs for coredns [5798ae515cc4] ...
	I0916 04:14:08.035228    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5798ae515cc4"
	I0916 04:14:08.047364    4655 logs.go:123] Gathering logs for coredns [8869c7622640] ...
	I0916 04:14:08.047376    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8869c7622640"
	I0916 04:14:08.064658    4655 logs.go:123] Gathering logs for storage-provisioner [1b9c4326b62d] ...
	I0916 04:14:08.064667    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b9c4326b62d"
	I0916 04:14:08.076214    4655 logs.go:123] Gathering logs for kubelet ...
	I0916 04:14:08.076222    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:14:08.113442    4655 logs.go:123] Gathering logs for dmesg ...
	I0916 04:14:08.113451    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:14:08.117607    4655 logs.go:123] Gathering logs for container status ...
	I0916 04:14:08.117613    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:14:10.630656    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:14:15.631435    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:14:15.631727    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:14:15.658624    4655 logs.go:276] 1 containers: [bb77a45fbb50]
	I0916 04:14:15.658768    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:14:15.674720    4655 logs.go:276] 1 containers: [a44a59d44c6e]
	I0916 04:14:15.674820    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:14:15.687650    4655 logs.go:276] 4 containers: [5798ae515cc4 c83fcdb1c777 8869c7622640 e3ad1db7ced5]
	I0916 04:14:15.687743    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:14:15.698638    4655 logs.go:276] 1 containers: [6497fc64f33e]
	I0916 04:14:15.698713    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:14:15.708948    4655 logs.go:276] 1 containers: [c0732c73e3bf]
	I0916 04:14:15.709031    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:14:15.719814    4655 logs.go:276] 1 containers: [6872f5cacb62]
	I0916 04:14:15.719896    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:14:15.730718    4655 logs.go:276] 0 containers: []
	W0916 04:14:15.730731    4655 logs.go:278] No container was found matching "kindnet"
	I0916 04:14:15.730805    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:14:15.741525    4655 logs.go:276] 1 containers: [1b9c4326b62d]
	I0916 04:14:15.741542    4655 logs.go:123] Gathering logs for dmesg ...
	I0916 04:14:15.741548    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:14:15.746131    4655 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:14:15.746138    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:14:15.779640    4655 logs.go:123] Gathering logs for coredns [5798ae515cc4] ...
	I0916 04:14:15.779656    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5798ae515cc4"
	I0916 04:14:15.791318    4655 logs.go:123] Gathering logs for coredns [c83fcdb1c777] ...
	I0916 04:14:15.791329    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c83fcdb1c777"
	I0916 04:14:15.802611    4655 logs.go:123] Gathering logs for kube-apiserver [bb77a45fbb50] ...
	I0916 04:14:15.802620    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb77a45fbb50"
	I0916 04:14:15.822251    4655 logs.go:123] Gathering logs for coredns [8869c7622640] ...
	I0916 04:14:15.822264    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8869c7622640"
	I0916 04:14:15.835593    4655 logs.go:123] Gathering logs for kube-scheduler [6497fc64f33e] ...
	I0916 04:14:15.835606    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6497fc64f33e"
	I0916 04:14:15.854086    4655 logs.go:123] Gathering logs for kube-proxy [c0732c73e3bf] ...
	I0916 04:14:15.854097    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0732c73e3bf"
	I0916 04:14:15.866405    4655 logs.go:123] Gathering logs for coredns [e3ad1db7ced5] ...
	I0916 04:14:15.866418    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3ad1db7ced5"
	I0916 04:14:15.877642    4655 logs.go:123] Gathering logs for storage-provisioner [1b9c4326b62d] ...
	I0916 04:14:15.877652    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b9c4326b62d"
	I0916 04:14:15.888853    4655 logs.go:123] Gathering logs for kubelet ...
	I0916 04:14:15.888865    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:14:15.925795    4655 logs.go:123] Gathering logs for etcd [a44a59d44c6e] ...
	I0916 04:14:15.925804    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a44a59d44c6e"
	I0916 04:14:15.939608    4655 logs.go:123] Gathering logs for kube-controller-manager [6872f5cacb62] ...
	I0916 04:14:15.939620    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6872f5cacb62"
	I0916 04:14:15.957264    4655 logs.go:123] Gathering logs for Docker ...
	I0916 04:14:15.957274    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:14:15.981857    4655 logs.go:123] Gathering logs for container status ...
	I0916 04:14:15.981865    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:14:18.494987    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:14:23.495557    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:14:23.495794    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:14:23.517652    4655 logs.go:276] 1 containers: [bb77a45fbb50]
	I0916 04:14:23.517762    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:14:23.542139    4655 logs.go:276] 1 containers: [a44a59d44c6e]
	I0916 04:14:23.542227    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:14:23.554202    4655 logs.go:276] 4 containers: [5798ae515cc4 c83fcdb1c777 8869c7622640 e3ad1db7ced5]
	I0916 04:14:23.554287    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:14:23.565166    4655 logs.go:276] 1 containers: [6497fc64f33e]
	I0916 04:14:23.565248    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:14:23.575334    4655 logs.go:276] 1 containers: [c0732c73e3bf]
	I0916 04:14:23.575407    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:14:23.585852    4655 logs.go:276] 1 containers: [6872f5cacb62]
	I0916 04:14:23.585923    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:14:23.595972    4655 logs.go:276] 0 containers: []
	W0916 04:14:23.595986    4655 logs.go:278] No container was found matching "kindnet"
	I0916 04:14:23.596058    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:14:23.606271    4655 logs.go:276] 1 containers: [1b9c4326b62d]
	I0916 04:14:23.606288    4655 logs.go:123] Gathering logs for kubelet ...
	I0916 04:14:23.606292    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:14:23.642978    4655 logs.go:123] Gathering logs for kube-proxy [c0732c73e3bf] ...
	I0916 04:14:23.642990    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0732c73e3bf"
	I0916 04:14:23.656564    4655 logs.go:123] Gathering logs for Docker ...
	I0916 04:14:23.656575    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:14:23.681395    4655 logs.go:123] Gathering logs for kube-controller-manager [6872f5cacb62] ...
	I0916 04:14:23.681402    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6872f5cacb62"
	I0916 04:14:23.698480    4655 logs.go:123] Gathering logs for storage-provisioner [1b9c4326b62d] ...
	I0916 04:14:23.698491    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b9c4326b62d"
	I0916 04:14:23.710543    4655 logs.go:123] Gathering logs for container status ...
	I0916 04:14:23.710554    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:14:23.722165    4655 logs.go:123] Gathering logs for dmesg ...
	I0916 04:14:23.722178    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:14:23.726420    4655 logs.go:123] Gathering logs for coredns [e3ad1db7ced5] ...
	I0916 04:14:23.726427    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3ad1db7ced5"
	I0916 04:14:23.747055    4655 logs.go:123] Gathering logs for kube-scheduler [6497fc64f33e] ...
	I0916 04:14:23.747068    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6497fc64f33e"
	I0916 04:14:23.762120    4655 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:14:23.762130    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:14:23.800820    4655 logs.go:123] Gathering logs for etcd [a44a59d44c6e] ...
	I0916 04:14:23.800833    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a44a59d44c6e"
	I0916 04:14:23.819678    4655 logs.go:123] Gathering logs for coredns [5798ae515cc4] ...
	I0916 04:14:23.819692    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5798ae515cc4"
	I0916 04:14:23.831891    4655 logs.go:123] Gathering logs for kube-apiserver [bb77a45fbb50] ...
	I0916 04:14:23.831901    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb77a45fbb50"
	I0916 04:14:23.846147    4655 logs.go:123] Gathering logs for coredns [c83fcdb1c777] ...
	I0916 04:14:23.846158    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c83fcdb1c777"
	I0916 04:14:23.866208    4655 logs.go:123] Gathering logs for coredns [8869c7622640] ...
	I0916 04:14:23.866219    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8869c7622640"
	I0916 04:14:26.380248    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:14:31.381211    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:14:31.381457    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:14:31.400937    4655 logs.go:276] 1 containers: [bb77a45fbb50]
	I0916 04:14:31.401054    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:14:31.415247    4655 logs.go:276] 1 containers: [a44a59d44c6e]
	I0916 04:14:31.415346    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:14:31.427700    4655 logs.go:276] 4 containers: [5798ae515cc4 c83fcdb1c777 8869c7622640 e3ad1db7ced5]
	I0916 04:14:31.427789    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:14:31.446227    4655 logs.go:276] 1 containers: [6497fc64f33e]
	I0916 04:14:31.446309    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:14:31.456934    4655 logs.go:276] 1 containers: [c0732c73e3bf]
	I0916 04:14:31.457008    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:14:31.467054    4655 logs.go:276] 1 containers: [6872f5cacb62]
	I0916 04:14:31.467123    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:14:31.477749    4655 logs.go:276] 0 containers: []
	W0916 04:14:31.477761    4655 logs.go:278] No container was found matching "kindnet"
	I0916 04:14:31.477834    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:14:31.497740    4655 logs.go:276] 1 containers: [1b9c4326b62d]
	I0916 04:14:31.497758    4655 logs.go:123] Gathering logs for kube-apiserver [bb77a45fbb50] ...
	I0916 04:14:31.497764    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb77a45fbb50"
	I0916 04:14:31.511419    4655 logs.go:123] Gathering logs for coredns [5798ae515cc4] ...
	I0916 04:14:31.511432    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5798ae515cc4"
	I0916 04:14:31.522749    4655 logs.go:123] Gathering logs for container status ...
	I0916 04:14:31.522760    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:14:31.534885    4655 logs.go:123] Gathering logs for dmesg ...
	I0916 04:14:31.534897    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:14:31.539246    4655 logs.go:123] Gathering logs for etcd [a44a59d44c6e] ...
	I0916 04:14:31.539253    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a44a59d44c6e"
	I0916 04:14:31.553549    4655 logs.go:123] Gathering logs for coredns [c83fcdb1c777] ...
	I0916 04:14:31.553558    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c83fcdb1c777"
	I0916 04:14:31.565657    4655 logs.go:123] Gathering logs for coredns [8869c7622640] ...
	I0916 04:14:31.565667    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8869c7622640"
	I0916 04:14:31.577061    4655 logs.go:123] Gathering logs for kube-scheduler [6497fc64f33e] ...
	I0916 04:14:31.577071    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6497fc64f33e"
	I0916 04:14:31.591643    4655 logs.go:123] Gathering logs for kubelet ...
	I0916 04:14:31.591653    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:14:31.628841    4655 logs.go:123] Gathering logs for kube-proxy [c0732c73e3bf] ...
	I0916 04:14:31.628850    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0732c73e3bf"
	I0916 04:14:31.640610    4655 logs.go:123] Gathering logs for kube-controller-manager [6872f5cacb62] ...
	I0916 04:14:31.640621    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6872f5cacb62"
	I0916 04:14:31.658145    4655 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:14:31.658158    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:14:31.693039    4655 logs.go:123] Gathering logs for coredns [e3ad1db7ced5] ...
	I0916 04:14:31.693049    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3ad1db7ced5"
	I0916 04:14:31.708853    4655 logs.go:123] Gathering logs for storage-provisioner [1b9c4326b62d] ...
	I0916 04:14:31.708864    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b9c4326b62d"
	I0916 04:14:31.720709    4655 logs.go:123] Gathering logs for Docker ...
	I0916 04:14:31.720722    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:14:34.246053    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:14:39.248358    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:14:39.248840    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:14:39.281113    4655 logs.go:276] 1 containers: [bb77a45fbb50]
	I0916 04:14:39.281266    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:14:39.301075    4655 logs.go:276] 1 containers: [a44a59d44c6e]
	I0916 04:14:39.301188    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:14:39.315371    4655 logs.go:276] 4 containers: [5798ae515cc4 c83fcdb1c777 8869c7622640 e3ad1db7ced5]
	I0916 04:14:39.315467    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:14:39.327380    4655 logs.go:276] 1 containers: [6497fc64f33e]
	I0916 04:14:39.327467    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:14:39.338288    4655 logs.go:276] 1 containers: [c0732c73e3bf]
	I0916 04:14:39.338371    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:14:39.349834    4655 logs.go:276] 1 containers: [6872f5cacb62]
	I0916 04:14:39.349911    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:14:39.359852    4655 logs.go:276] 0 containers: []
	W0916 04:14:39.359864    4655 logs.go:278] No container was found matching "kindnet"
	I0916 04:14:39.359935    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:14:39.370644    4655 logs.go:276] 1 containers: [1b9c4326b62d]
	I0916 04:14:39.370661    4655 logs.go:123] Gathering logs for kube-scheduler [6497fc64f33e] ...
	I0916 04:14:39.370666    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6497fc64f33e"
	I0916 04:14:39.385983    4655 logs.go:123] Gathering logs for kube-proxy [c0732c73e3bf] ...
	I0916 04:14:39.385993    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0732c73e3bf"
	I0916 04:14:39.398010    4655 logs.go:123] Gathering logs for kubelet ...
	I0916 04:14:39.398021    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:14:39.442034    4655 logs.go:123] Gathering logs for etcd [a44a59d44c6e] ...
	I0916 04:14:39.442046    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a44a59d44c6e"
	I0916 04:14:39.458127    4655 logs.go:123] Gathering logs for coredns [5798ae515cc4] ...
	I0916 04:14:39.458145    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5798ae515cc4"
	I0916 04:14:39.469697    4655 logs.go:123] Gathering logs for coredns [e3ad1db7ced5] ...
	I0916 04:14:39.469707    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3ad1db7ced5"
	I0916 04:14:39.481355    4655 logs.go:123] Gathering logs for kube-controller-manager [6872f5cacb62] ...
	I0916 04:14:39.481366    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6872f5cacb62"
	I0916 04:14:39.498353    4655 logs.go:123] Gathering logs for container status ...
	I0916 04:14:39.498364    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:14:39.510135    4655 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:14:39.510146    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:14:39.549625    4655 logs.go:123] Gathering logs for coredns [c83fcdb1c777] ...
	I0916 04:14:39.549640    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c83fcdb1c777"
	I0916 04:14:39.561608    4655 logs.go:123] Gathering logs for Docker ...
	I0916 04:14:39.561617    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:14:39.586092    4655 logs.go:123] Gathering logs for dmesg ...
	I0916 04:14:39.586105    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:14:39.590971    4655 logs.go:123] Gathering logs for coredns [8869c7622640] ...
	I0916 04:14:39.590978    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8869c7622640"
	I0916 04:14:39.602364    4655 logs.go:123] Gathering logs for storage-provisioner [1b9c4326b62d] ...
	I0916 04:14:39.602374    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b9c4326b62d"
	I0916 04:14:39.614439    4655 logs.go:123] Gathering logs for kube-apiserver [bb77a45fbb50] ...
	I0916 04:14:39.614451    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb77a45fbb50"
	I0916 04:14:42.130898    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:14:47.133100    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:14:47.133288    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:14:47.150554    4655 logs.go:276] 1 containers: [bb77a45fbb50]
	I0916 04:14:47.150648    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:14:47.162736    4655 logs.go:276] 1 containers: [a44a59d44c6e]
	I0916 04:14:47.162815    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:14:47.173322    4655 logs.go:276] 4 containers: [5798ae515cc4 c83fcdb1c777 8869c7622640 e3ad1db7ced5]
	I0916 04:14:47.173406    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:14:47.183999    4655 logs.go:276] 1 containers: [6497fc64f33e]
	I0916 04:14:47.184083    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:14:47.206293    4655 logs.go:276] 1 containers: [c0732c73e3bf]
	I0916 04:14:47.206377    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:14:47.217090    4655 logs.go:276] 1 containers: [6872f5cacb62]
	I0916 04:14:47.217167    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:14:47.227174    4655 logs.go:276] 0 containers: []
	W0916 04:14:47.227185    4655 logs.go:278] No container was found matching "kindnet"
	I0916 04:14:47.227256    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:14:47.238138    4655 logs.go:276] 1 containers: [1b9c4326b62d]
	I0916 04:14:47.238155    4655 logs.go:123] Gathering logs for kubelet ...
	I0916 04:14:47.238161    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:14:47.275326    4655 logs.go:123] Gathering logs for dmesg ...
	I0916 04:14:47.275336    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:14:47.280400    4655 logs.go:123] Gathering logs for kube-scheduler [6497fc64f33e] ...
	I0916 04:14:47.280408    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6497fc64f33e"
	I0916 04:14:47.295332    4655 logs.go:123] Gathering logs for kube-controller-manager [6872f5cacb62] ...
	I0916 04:14:47.295345    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6872f5cacb62"
	I0916 04:14:47.312846    4655 logs.go:123] Gathering logs for Docker ...
	I0916 04:14:47.312857    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:14:47.337775    4655 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:14:47.337783    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:14:47.372659    4655 logs.go:123] Gathering logs for coredns [5798ae515cc4] ...
	I0916 04:14:47.372671    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5798ae515cc4"
	I0916 04:14:47.384513    4655 logs.go:123] Gathering logs for coredns [8869c7622640] ...
	I0916 04:14:47.384525    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8869c7622640"
	I0916 04:14:47.396160    4655 logs.go:123] Gathering logs for kube-proxy [c0732c73e3bf] ...
	I0916 04:14:47.396171    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0732c73e3bf"
	I0916 04:14:47.412178    4655 logs.go:123] Gathering logs for coredns [c83fcdb1c777] ...
	I0916 04:14:47.412190    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c83fcdb1c777"
	I0916 04:14:47.426493    4655 logs.go:123] Gathering logs for coredns [e3ad1db7ced5] ...
	I0916 04:14:47.426503    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3ad1db7ced5"
	I0916 04:14:47.440935    4655 logs.go:123] Gathering logs for kube-apiserver [bb77a45fbb50] ...
	I0916 04:14:47.440949    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb77a45fbb50"
	I0916 04:14:47.455763    4655 logs.go:123] Gathering logs for etcd [a44a59d44c6e] ...
	I0916 04:14:47.455776    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a44a59d44c6e"
	I0916 04:14:47.471325    4655 logs.go:123] Gathering logs for storage-provisioner [1b9c4326b62d] ...
	I0916 04:14:47.471335    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b9c4326b62d"
	I0916 04:14:47.483711    4655 logs.go:123] Gathering logs for container status ...
	I0916 04:14:47.483722    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:14:49.997039    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:14:54.998151    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:14:54.998262    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:14:55.011537    4655 logs.go:276] 1 containers: [bb77a45fbb50]
	I0916 04:14:55.011632    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:14:55.024232    4655 logs.go:276] 1 containers: [a44a59d44c6e]
	I0916 04:14:55.024319    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:14:55.036484    4655 logs.go:276] 4 containers: [5798ae515cc4 c83fcdb1c777 8869c7622640 e3ad1db7ced5]
	I0916 04:14:55.036551    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:14:55.048969    4655 logs.go:276] 1 containers: [6497fc64f33e]
	I0916 04:14:55.049022    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:14:55.060568    4655 logs.go:276] 1 containers: [c0732c73e3bf]
	I0916 04:14:55.060649    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:14:55.071319    4655 logs.go:276] 1 containers: [6872f5cacb62]
	I0916 04:14:55.071403    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:14:55.082957    4655 logs.go:276] 0 containers: []
	W0916 04:14:55.082971    4655 logs.go:278] No container was found matching "kindnet"
	I0916 04:14:55.083044    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:14:55.094916    4655 logs.go:276] 1 containers: [1b9c4326b62d]
	I0916 04:14:55.094934    4655 logs.go:123] Gathering logs for kubelet ...
	I0916 04:14:55.094939    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:14:55.132764    4655 logs.go:123] Gathering logs for coredns [5798ae515cc4] ...
	I0916 04:14:55.132783    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5798ae515cc4"
	I0916 04:14:55.146043    4655 logs.go:123] Gathering logs for coredns [c83fcdb1c777] ...
	I0916 04:14:55.146054    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c83fcdb1c777"
	I0916 04:14:55.158415    4655 logs.go:123] Gathering logs for container status ...
	I0916 04:14:55.158426    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:14:55.170276    4655 logs.go:123] Gathering logs for kube-controller-manager [6872f5cacb62] ...
	I0916 04:14:55.170288    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6872f5cacb62"
	I0916 04:14:55.189043    4655 logs.go:123] Gathering logs for storage-provisioner [1b9c4326b62d] ...
	I0916 04:14:55.189058    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b9c4326b62d"
	I0916 04:14:55.201355    4655 logs.go:123] Gathering logs for Docker ...
	I0916 04:14:55.201367    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:14:55.226783    4655 logs.go:123] Gathering logs for kube-scheduler [6497fc64f33e] ...
	I0916 04:14:55.226802    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6497fc64f33e"
	I0916 04:14:55.244037    4655 logs.go:123] Gathering logs for kube-proxy [c0732c73e3bf] ...
	I0916 04:14:55.244053    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0732c73e3bf"
	I0916 04:14:55.256630    4655 logs.go:123] Gathering logs for dmesg ...
	I0916 04:14:55.256641    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:14:55.261415    4655 logs.go:123] Gathering logs for kube-apiserver [bb77a45fbb50] ...
	I0916 04:14:55.261427    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb77a45fbb50"
	I0916 04:14:55.277385    4655 logs.go:123] Gathering logs for coredns [8869c7622640] ...
	I0916 04:14:55.277404    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8869c7622640"
	I0916 04:14:55.297948    4655 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:14:55.297959    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:14:55.334680    4655 logs.go:123] Gathering logs for etcd [a44a59d44c6e] ...
	I0916 04:14:55.334693    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a44a59d44c6e"
	I0916 04:14:55.348828    4655 logs.go:123] Gathering logs for coredns [e3ad1db7ced5] ...
	I0916 04:14:55.348844    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3ad1db7ced5"
	I0916 04:14:57.863652    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:15:02.865915    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:15:02.866160    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:15:02.881822    4655 logs.go:276] 1 containers: [bb77a45fbb50]
	I0916 04:15:02.881923    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:15:02.894111    4655 logs.go:276] 1 containers: [a44a59d44c6e]
	I0916 04:15:02.894201    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:15:02.905689    4655 logs.go:276] 4 containers: [5798ae515cc4 c83fcdb1c777 8869c7622640 e3ad1db7ced5]
	I0916 04:15:02.905778    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:15:02.918912    4655 logs.go:276] 1 containers: [6497fc64f33e]
	I0916 04:15:02.918986    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:15:02.929911    4655 logs.go:276] 1 containers: [c0732c73e3bf]
	I0916 04:15:02.929996    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:15:02.940540    4655 logs.go:276] 1 containers: [6872f5cacb62]
	I0916 04:15:02.940615    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:15:02.950287    4655 logs.go:276] 0 containers: []
	W0916 04:15:02.950299    4655 logs.go:278] No container was found matching "kindnet"
	I0916 04:15:02.950368    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:15:02.960278    4655 logs.go:276] 1 containers: [1b9c4326b62d]
	I0916 04:15:02.960293    4655 logs.go:123] Gathering logs for kube-scheduler [6497fc64f33e] ...
	I0916 04:15:02.960300    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6497fc64f33e"
	I0916 04:15:02.975059    4655 logs.go:123] Gathering logs for kube-proxy [c0732c73e3bf] ...
	I0916 04:15:02.975069    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0732c73e3bf"
	I0916 04:15:02.994501    4655 logs.go:123] Gathering logs for kubelet ...
	I0916 04:15:02.994511    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:15:03.033510    4655 logs.go:123] Gathering logs for coredns [5798ae515cc4] ...
	I0916 04:15:03.033518    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5798ae515cc4"
	I0916 04:15:03.047631    4655 logs.go:123] Gathering logs for coredns [8869c7622640] ...
	I0916 04:15:03.047643    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8869c7622640"
	I0916 04:15:03.059226    4655 logs.go:123] Gathering logs for storage-provisioner [1b9c4326b62d] ...
	I0916 04:15:03.059237    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b9c4326b62d"
	I0916 04:15:03.071052    4655 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:15:03.071062    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:15:03.120496    4655 logs.go:123] Gathering logs for kube-apiserver [bb77a45fbb50] ...
	I0916 04:15:03.120506    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb77a45fbb50"
	I0916 04:15:03.134752    4655 logs.go:123] Gathering logs for etcd [a44a59d44c6e] ...
	I0916 04:15:03.134762    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a44a59d44c6e"
	I0916 04:15:03.148329    4655 logs.go:123] Gathering logs for coredns [e3ad1db7ced5] ...
	I0916 04:15:03.148341    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3ad1db7ced5"
	I0916 04:15:03.160234    4655 logs.go:123] Gathering logs for kube-controller-manager [6872f5cacb62] ...
	I0916 04:15:03.160245    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6872f5cacb62"
	I0916 04:15:03.184613    4655 logs.go:123] Gathering logs for dmesg ...
	I0916 04:15:03.184624    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:15:03.188981    4655 logs.go:123] Gathering logs for Docker ...
	I0916 04:15:03.188987    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:15:03.211810    4655 logs.go:123] Gathering logs for container status ...
	I0916 04:15:03.211816    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:15:03.223853    4655 logs.go:123] Gathering logs for coredns [c83fcdb1c777] ...
	I0916 04:15:03.223862    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c83fcdb1c777"
	I0916 04:15:05.738145    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:15:10.740298    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:15:10.744378    4655 out.go:201] 
	W0916 04:15:10.747150    4655 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0916 04:15:10.747156    4655 out.go:270] * 
	W0916 04:15:10.747641    4655 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 04:15:10.759296    4655 out.go:201] 

** /stderr **
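The stderr trace above ends with minikube giving up on the control plane: every probe of https://10.0.2.15:8443/healthz times out until the 6m0s node-wait budget is spent, which yields the GUEST_START failure (exit status 80). As a rough illustration of that probe loop, here is a minimal, self-contained Go sketch of polling a healthz endpoint until an overall deadline expires; the function name, timeouts, and TLS handling are assumptions for the sketch, not minikube's actual api_server.go implementation.

	package main

	import (
		"context"
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns 200 OK or the overall deadline
	// passes. Each probe gets its own short timeout, mirroring the ~5s gaps
	// between "Checking apiserver healthz" and "stopped: ... context deadline
	// exceeded" in the trace. (Illustrative sketch only.)
	func waitForHealthz(url string, overall, perProbe time.Duration) error {
		client := &http.Client{
			Timeout: perProbe,
			// The apiserver serves a cert for its cluster IP; a bare probe like
			// this one would skip verification. Assumption for the sketch only.
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(overall)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // apiserver reported healthy
				}
			}
			time.Sleep(2 * time.Second) // brief pause before the next probe
		}
		return context.DeadlineExceeded
	}

	func main() {
		err := waitForHealthz("https://10.0.2.15:8443/healthz", 6*time.Minute, 5*time.Second)
		if err != nil {
			fmt.Println("apiserver healthz never reported healthy:", err)
		}
	}

When the loop exhausts its budget, the caller surfaces exactly the kind of "wait for healthy API server" error recorded above; in between failed probes, the test binary gathers the per-container logs seen throughout this trace.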
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-588000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:629: *** TestRunningBinaryUpgrade FAILED at 2024-09-16 04:15:10.864956 -0700 PDT m=+3326.898513668
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-588000 -n running-upgrade-588000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-588000 -n running-upgrade-588000: exit status 2 (15.730185292s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-588000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-622000          | force-systemd-flag-622000 | jenkins | v1.34.0 | 16 Sep 24 04:05 PDT |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-899000              | force-systemd-env-899000  | jenkins | v1.34.0 | 16 Sep 24 04:05 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-899000           | force-systemd-env-899000  | jenkins | v1.34.0 | 16 Sep 24 04:05 PDT | 16 Sep 24 04:05 PDT |
	| start   | -p docker-flags-354000                | docker-flags-354000       | jenkins | v1.34.0 | 16 Sep 24 04:05 PDT |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-622000             | force-systemd-flag-622000 | jenkins | v1.34.0 | 16 Sep 24 04:05 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-622000          | force-systemd-flag-622000 | jenkins | v1.34.0 | 16 Sep 24 04:05 PDT | 16 Sep 24 04:05 PDT |
	| start   | -p cert-expiration-703000             | cert-expiration-703000    | jenkins | v1.34.0 | 16 Sep 24 04:05 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-354000 ssh               | docker-flags-354000       | jenkins | v1.34.0 | 16 Sep 24 04:05 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-354000 ssh               | docker-flags-354000       | jenkins | v1.34.0 | 16 Sep 24 04:05 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-354000                | docker-flags-354000       | jenkins | v1.34.0 | 16 Sep 24 04:05 PDT | 16 Sep 24 04:05 PDT |
	| start   | -p cert-options-779000                | cert-options-779000       | jenkins | v1.34.0 | 16 Sep 24 04:05 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-779000 ssh               | cert-options-779000       | jenkins | v1.34.0 | 16 Sep 24 04:05 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-779000 -- sudo        | cert-options-779000       | jenkins | v1.34.0 | 16 Sep 24 04:05 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-779000                | cert-options-779000       | jenkins | v1.34.0 | 16 Sep 24 04:05 PDT | 16 Sep 24 04:05 PDT |
	| start   | -p running-upgrade-588000             | minikube                  | jenkins | v1.26.0 | 16 Sep 24 04:05 PDT | 16 Sep 24 04:06 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-588000             | running-upgrade-588000    | jenkins | v1.34.0 | 16 Sep 24 04:06 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-703000             | cert-expiration-703000    | jenkins | v1.34.0 | 16 Sep 24 04:08 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-703000             | cert-expiration-703000    | jenkins | v1.34.0 | 16 Sep 24 04:08 PDT | 16 Sep 24 04:08 PDT |
	| start   | -p kubernetes-upgrade-711000          | kubernetes-upgrade-711000 | jenkins | v1.34.0 | 16 Sep 24 04:08 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-711000          | kubernetes-upgrade-711000 | jenkins | v1.34.0 | 16 Sep 24 04:09 PDT | 16 Sep 24 04:09 PDT |
	| start   | -p kubernetes-upgrade-711000          | kubernetes-upgrade-711000 | jenkins | v1.34.0 | 16 Sep 24 04:09 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-711000          | kubernetes-upgrade-711000 | jenkins | v1.34.0 | 16 Sep 24 04:09 PDT | 16 Sep 24 04:09 PDT |
	| start   | -p stopped-upgrade-716000             | minikube                  | jenkins | v1.26.0 | 16 Sep 24 04:09 PDT | 16 Sep 24 04:10 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-716000 stop           | minikube                  | jenkins | v1.26.0 | 16 Sep 24 04:10 PDT | 16 Sep 24 04:10 PDT |
	| start   | -p stopped-upgrade-716000             | stopped-upgrade-716000    | jenkins | v1.34.0 | 16 Sep 24 04:10 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 04:10:14
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 04:10:14.829774    4792 out.go:345] Setting OutFile to fd 1 ...
	I0916 04:10:14.829943    4792 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:10:14.829947    4792 out.go:358] Setting ErrFile to fd 2...
	I0916 04:10:14.829950    4792 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:10:14.830079    4792 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1133/.minikube/bin
	I0916 04:10:14.831203    4792 out.go:352] Setting JSON to false
	I0916 04:10:14.850133    4792 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4177,"bootTime":1726480837,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0916 04:10:14.850242    4792 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 04:10:14.854281    4792 out.go:177] * [stopped-upgrade-716000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0916 04:10:14.873461    4792 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 04:10:14.873474    4792 notify.go:220] Checking for updates...
	I0916 04:10:14.880317    4792 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	I0916 04:10:14.883295    4792 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0916 04:10:14.886328    4792 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 04:10:14.889330    4792 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	I0916 04:10:14.890371    4792 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 04:10:14.893643    4792 config.go:182] Loaded profile config "stopped-upgrade-716000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0916 04:10:14.897270    4792 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0916 04:10:14.900307    4792 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 04:10:14.904313    4792 out.go:177] * Using the qemu2 driver based on existing profile
	I0916 04:10:14.911292    4792 start.go:297] selected driver: qemu2
	I0916 04:10:14.911300    4792 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-716000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50516 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-716000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0916 04:10:14.911361    4792 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 04:10:14.914168    4792 cni.go:84] Creating CNI manager for ""
	I0916 04:10:14.914206    4792 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 04:10:14.914226    4792 start.go:340] cluster config:
	{Name:stopped-upgrade-716000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50516 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-716000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0916 04:10:14.914283    4792 iso.go:125] acquiring lock: {Name:mk55e9fb1297ea51932361bbd0234f8c1091a697 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 04:10:14.921319    4792 out.go:177] * Starting "stopped-upgrade-716000" primary control-plane node in "stopped-upgrade-716000" cluster
	I0916 04:10:14.925286    4792 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0916 04:10:14.925317    4792 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0916 04:10:14.925329    4792 cache.go:56] Caching tarball of preloaded images
	I0916 04:10:14.925416    4792 preload.go:172] Found /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0916 04:10:14.925422    4792 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0916 04:10:14.925483    4792 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/stopped-upgrade-716000/config.json ...
	I0916 04:10:14.925884    4792 start.go:360] acquireMachinesLock for stopped-upgrade-716000: {Name:mk751074fc8f4092f9eb4cf42ed6e63cca97a7fa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 04:10:14.925921    4792 start.go:364] duration metric: took 28.792µs to acquireMachinesLock for "stopped-upgrade-716000"
	I0916 04:10:14.925931    4792 start.go:96] Skipping create...Using existing machine configuration
	I0916 04:10:14.925936    4792 fix.go:54] fixHost starting: 
	I0916 04:10:14.926045    4792 fix.go:112] recreateIfNeeded on stopped-upgrade-716000: state=Stopped err=<nil>
	W0916 04:10:14.926054    4792 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 04:10:14.930360    4792 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-716000" ...
	I0916 04:10:14.632240    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:10:14.632724    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:10:14.662937    4655 logs.go:276] 2 containers: [6e10ade08bbc f96872a76692]
	I0916 04:10:14.663099    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:10:14.681589    4655 logs.go:276] 2 containers: [05bfeea67744 097738ff3821]
	I0916 04:10:14.681676    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:10:14.695443    4655 logs.go:276] 1 containers: [e11e4df1f883]
	I0916 04:10:14.695539    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:10:14.707631    4655 logs.go:276] 2 containers: [235c08ec6496 cc2fad651b22]
	I0916 04:10:14.707713    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:10:14.720231    4655 logs.go:276] 1 containers: [1ae068289404]
	I0916 04:10:14.720305    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:10:14.730501    4655 logs.go:276] 2 containers: [309ff69f986d 11b972b52433]
	I0916 04:10:14.730583    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:10:14.740662    4655 logs.go:276] 0 containers: []
	W0916 04:10:14.740675    4655 logs.go:278] No container was found matching "kindnet"
	I0916 04:10:14.740748    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:10:14.751110    4655 logs.go:276] 2 containers: [cd2986a75a6f 4e73ffbfc80e]
	I0916 04:10:14.751126    4655 logs.go:123] Gathering logs for etcd [05bfeea67744] ...
	I0916 04:10:14.751131    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bfeea67744"
	I0916 04:10:14.764602    4655 logs.go:123] Gathering logs for coredns [e11e4df1f883] ...
	I0916 04:10:14.764616    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e11e4df1f883"
	I0916 04:10:14.776975    4655 logs.go:123] Gathering logs for kube-controller-manager [309ff69f986d] ...
	I0916 04:10:14.776985    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 309ff69f986d"
	I0916 04:10:14.798088    4655 logs.go:123] Gathering logs for storage-provisioner [4e73ffbfc80e] ...
	I0916 04:10:14.798101    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e73ffbfc80e"
	I0916 04:10:14.815326    4655 logs.go:123] Gathering logs for dmesg ...
	I0916 04:10:14.815336    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:10:14.820326    4655 logs.go:123] Gathering logs for kube-apiserver [6e10ade08bbc] ...
	I0916 04:10:14.820341    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e10ade08bbc"
	I0916 04:10:14.839205    4655 logs.go:123] Gathering logs for kube-scheduler [235c08ec6496] ...
	I0916 04:10:14.839216    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 235c08ec6496"
	I0916 04:10:14.851301    4655 logs.go:123] Gathering logs for Docker ...
	I0916 04:10:14.851310    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:10:14.875353    4655 logs.go:123] Gathering logs for container status ...
	I0916 04:10:14.875363    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:10:14.887332    4655 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:10:14.887342    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:10:14.922336    4655 logs.go:123] Gathering logs for kube-scheduler [cc2fad651b22] ...
	I0916 04:10:14.922344    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc2fad651b22"
	I0916 04:10:14.937351    4655 logs.go:123] Gathering logs for kube-proxy [1ae068289404] ...
	I0916 04:10:14.937364    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ae068289404"
	I0916 04:10:14.949772    4655 logs.go:123] Gathering logs for kube-controller-manager [11b972b52433] ...
	I0916 04:10:14.949783    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b972b52433"
	I0916 04:10:14.962544    4655 logs.go:123] Gathering logs for kubelet ...
	I0916 04:10:14.962559    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:10:15.000051    4655 logs.go:123] Gathering logs for kube-apiserver [f96872a76692] ...
	I0916 04:10:15.000062    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f96872a76692"
	I0916 04:10:15.015752    4655 logs.go:123] Gathering logs for etcd [097738ff3821] ...
	I0916 04:10:15.015762    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 097738ff3821"
	I0916 04:10:15.031189    4655 logs.go:123] Gathering logs for storage-provisioner [cd2986a75a6f] ...
	I0916 04:10:15.031201    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd2986a75a6f"
	I0916 04:10:17.550737    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:10:14.938521    4792 qemu.go:418] Using hvf for hardware acceleration
	I0916 04:10:14.938636    4792 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/stopped-upgrade-716000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/stopped-upgrade-716000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/stopped-upgrade-716000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50481-:22,hostfwd=tcp::50482-:2376,hostname=stopped-upgrade-716000 -daemonize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/stopped-upgrade-716000/disk.qcow2
	I0916 04:10:14.986361    4792 main.go:141] libmachine: STDOUT: 
	I0916 04:10:14.986388    4792 main.go:141] libmachine: STDERR: 
	I0916 04:10:14.986396    4792 main.go:141] libmachine: Waiting for VM to start (ssh -p 50481 docker@127.0.0.1)...
	I0916 04:10:22.553394    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:10:22.553593    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:10:22.566028    4655 logs.go:276] 2 containers: [6e10ade08bbc f96872a76692]
	I0916 04:10:22.566109    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:10:22.579683    4655 logs.go:276] 2 containers: [05bfeea67744 097738ff3821]
	I0916 04:10:22.579776    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:10:22.590286    4655 logs.go:276] 1 containers: [e11e4df1f883]
	I0916 04:10:22.590374    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:10:22.601076    4655 logs.go:276] 2 containers: [235c08ec6496 cc2fad651b22]
	I0916 04:10:22.601158    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:10:22.614663    4655 logs.go:276] 1 containers: [1ae068289404]
	I0916 04:10:22.614745    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:10:22.625747    4655 logs.go:276] 2 containers: [309ff69f986d 11b972b52433]
	I0916 04:10:22.625826    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:10:22.639246    4655 logs.go:276] 0 containers: []
	W0916 04:10:22.639258    4655 logs.go:278] No container was found matching "kindnet"
	I0916 04:10:22.639321    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:10:22.651554    4655 logs.go:276] 2 containers: [cd2986a75a6f 4e73ffbfc80e]
	I0916 04:10:22.651570    4655 logs.go:123] Gathering logs for Docker ...
	I0916 04:10:22.651575    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:10:22.676563    4655 logs.go:123] Gathering logs for dmesg ...
	I0916 04:10:22.676570    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:10:22.681287    4655 logs.go:123] Gathering logs for etcd [097738ff3821] ...
	I0916 04:10:22.681294    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 097738ff3821"
	I0916 04:10:22.697770    4655 logs.go:123] Gathering logs for kube-scheduler [235c08ec6496] ...
	I0916 04:10:22.697785    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 235c08ec6496"
	I0916 04:10:22.710524    4655 logs.go:123] Gathering logs for kube-proxy [1ae068289404] ...
	I0916 04:10:22.710540    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ae068289404"
	I0916 04:10:22.722321    4655 logs.go:123] Gathering logs for kube-controller-manager [11b972b52433] ...
	I0916 04:10:22.722337    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b972b52433"
	I0916 04:10:22.734120    4655 logs.go:123] Gathering logs for storage-provisioner [cd2986a75a6f] ...
	I0916 04:10:22.734134    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd2986a75a6f"
	I0916 04:10:22.745871    4655 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:10:22.745881    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:10:22.781884    4655 logs.go:123] Gathering logs for kube-apiserver [f96872a76692] ...
	I0916 04:10:22.781896    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f96872a76692"
	I0916 04:10:22.794419    4655 logs.go:123] Gathering logs for kube-scheduler [cc2fad651b22] ...
	I0916 04:10:22.794428    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc2fad651b22"
	I0916 04:10:22.806872    4655 logs.go:123] Gathering logs for kube-controller-manager [309ff69f986d] ...
	I0916 04:10:22.806883    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 309ff69f986d"
	I0916 04:10:22.824637    4655 logs.go:123] Gathering logs for kubelet ...
	I0916 04:10:22.824650    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:10:22.862184    4655 logs.go:123] Gathering logs for kube-apiserver [6e10ade08bbc] ...
	I0916 04:10:22.862192    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e10ade08bbc"
	I0916 04:10:22.876618    4655 logs.go:123] Gathering logs for etcd [05bfeea67744] ...
	I0916 04:10:22.876628    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bfeea67744"
	I0916 04:10:22.891224    4655 logs.go:123] Gathering logs for container status ...
	I0916 04:10:22.891232    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:10:22.903654    4655 logs.go:123] Gathering logs for coredns [e11e4df1f883] ...
	I0916 04:10:22.903664    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e11e4df1f883"
	I0916 04:10:22.915739    4655 logs.go:123] Gathering logs for storage-provisioner [4e73ffbfc80e] ...
	I0916 04:10:22.915748    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e73ffbfc80e"
	I0916 04:10:25.429136    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:10:30.431422    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:10:30.431952    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:10:30.466829    4655 logs.go:276] 2 containers: [6e10ade08bbc f96872a76692]
	I0916 04:10:30.467006    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:10:30.487831    4655 logs.go:276] 2 containers: [05bfeea67744 097738ff3821]
	I0916 04:10:30.487954    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:10:30.504130    4655 logs.go:276] 1 containers: [e11e4df1f883]
	I0916 04:10:30.504222    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:10:30.520984    4655 logs.go:276] 2 containers: [235c08ec6496 cc2fad651b22]
	I0916 04:10:30.521064    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:10:30.531751    4655 logs.go:276] 1 containers: [1ae068289404]
	I0916 04:10:30.531847    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:10:30.546500    4655 logs.go:276] 2 containers: [309ff69f986d 11b972b52433]
	I0916 04:10:30.546582    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:10:30.556966    4655 logs.go:276] 0 containers: []
	W0916 04:10:30.556978    4655 logs.go:278] No container was found matching "kindnet"
	I0916 04:10:30.557040    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:10:30.569262    4655 logs.go:276] 2 containers: [cd2986a75a6f 4e73ffbfc80e]
	I0916 04:10:30.569284    4655 logs.go:123] Gathering logs for storage-provisioner [4e73ffbfc80e] ...
	I0916 04:10:30.569289    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e73ffbfc80e"
	I0916 04:10:30.581815    4655 logs.go:123] Gathering logs for kubelet ...
	I0916 04:10:30.581828    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:10:30.616586    4655 logs.go:123] Gathering logs for kube-apiserver [6e10ade08bbc] ...
	I0916 04:10:30.616602    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e10ade08bbc"
	I0916 04:10:30.636552    4655 logs.go:123] Gathering logs for etcd [05bfeea67744] ...
	I0916 04:10:30.636563    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bfeea67744"
	I0916 04:10:30.650845    4655 logs.go:123] Gathering logs for kube-proxy [1ae068289404] ...
	I0916 04:10:30.650853    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ae068289404"
	I0916 04:10:30.662652    4655 logs.go:123] Gathering logs for kube-controller-manager [309ff69f986d] ...
	I0916 04:10:30.662664    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 309ff69f986d"
	I0916 04:10:30.679856    4655 logs.go:123] Gathering logs for coredns [e11e4df1f883] ...
	I0916 04:10:30.679865    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e11e4df1f883"
	I0916 04:10:30.691159    4655 logs.go:123] Gathering logs for kube-controller-manager [11b972b52433] ...
	I0916 04:10:30.691173    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b972b52433"
	I0916 04:10:30.702168    4655 logs.go:123] Gathering logs for container status ...
	I0916 04:10:30.702177    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:10:30.714153    4655 logs.go:123] Gathering logs for dmesg ...
	I0916 04:10:30.714163    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:10:30.718426    4655 logs.go:123] Gathering logs for kube-apiserver [f96872a76692] ...
	I0916 04:10:30.718435    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f96872a76692"
	I0916 04:10:30.734321    4655 logs.go:123] Gathering logs for etcd [097738ff3821] ...
	I0916 04:10:30.734331    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 097738ff3821"
	I0916 04:10:30.749686    4655 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:10:30.749697    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:10:30.784072    4655 logs.go:123] Gathering logs for kube-scheduler [235c08ec6496] ...
	I0916 04:10:30.784083    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 235c08ec6496"
	I0916 04:10:30.795999    4655 logs.go:123] Gathering logs for kube-scheduler [cc2fad651b22] ...
	I0916 04:10:30.796008    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc2fad651b22"
	I0916 04:10:30.807276    4655 logs.go:123] Gathering logs for storage-provisioner [cd2986a75a6f] ...
	I0916 04:10:30.807286    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd2986a75a6f"
	I0916 04:10:30.818868    4655 logs.go:123] Gathering logs for Docker ...
	I0916 04:10:30.818877    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:10:33.344851    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:10:38.345518    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
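The loop above probes the apiserver's /healthz endpoint and gives up when the client timeout expires. One probe of the same endpoint from a shell, as a sketch (retry count, sleep, and -k are illustrative; -k is needed because the apiserver certificate is not in the host trust store):

    # Poll https://10.0.2.15:8443/healthz until it answers "ok" (illustrative loop).
    for i in $(seq 1 10); do
      if curl -sk --max-time 5 https://10.0.2.15:8443/healthz | grep -qx ok; then
        echo "apiserver healthy"; exit 0
      fi
      sleep 3
    done
    echo "apiserver did not become healthy" >&2; exit 1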
	I0916 04:10:38.345597    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:10:38.356528    4655 logs.go:276] 2 containers: [6e10ade08bbc f96872a76692]
	I0916 04:10:35.372200    4792 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/stopped-upgrade-716000/config.json ...
	I0916 04:10:35.372487    4792 machine.go:93] provisionDockerMachine start ...
	I0916 04:10:35.372543    4792 main.go:141] libmachine: Using SSH client type: native
	I0916 04:10:35.372698    4792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105111190] 0x1051139d0 <nil>  [] 0s} localhost 50481 <nil> <nil>}
	I0916 04:10:35.372704    4792 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 04:10:35.436053    4792 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0916 04:10:35.436067    4792 buildroot.go:166] provisioning hostname "stopped-upgrade-716000"
	I0916 04:10:35.436123    4792 main.go:141] libmachine: Using SSH client type: native
	I0916 04:10:35.436243    4792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105111190] 0x1051139d0 <nil>  [] 0s} localhost 50481 <nil> <nil>}
	I0916 04:10:35.436250    4792 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-716000 && echo "stopped-upgrade-716000" | sudo tee /etc/hostname
	I0916 04:10:35.504520    4792 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-716000
	
	I0916 04:10:35.504584    4792 main.go:141] libmachine: Using SSH client type: native
	I0916 04:10:35.504701    4792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105111190] 0x1051139d0 <nil>  [] 0s} localhost 50481 <nil> <nil>}
	I0916 04:10:35.504710    4792 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-716000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-716000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-716000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 04:10:35.570907    4792 main.go:141] libmachine: SSH cmd err, output: <nil>: 
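The /etc/hosts edit above is deliberately idempotent: nothing changes if a line already maps the hostname, an existing 127.0.1.1 entry is rewritten in place, and only otherwise is a new line appended. The same pattern condensed, with the hostname as a placeholder taken from this run:

    HOST=stopped-upgrade-716000                 # example value from this run
    if ! grep -q "\s${HOST}$" /etc/hosts; then  # skip if already mapped
      if grep -q '^127\.0\.1\.1\s' /etc/hosts; then
        sudo sed -i "s/^127\.0\.1\.1\s.*/127.0.1.1 ${HOST}/" /etc/hosts  # rewrite entry
      else
        echo "127.0.1.1 ${HOST}" | sudo tee -a /etc/hosts >/dev/null     # append entry
      fi
    fi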
	I0916 04:10:35.570921    4792 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19651-1133/.minikube CaCertPath:/Users/jenkins/minikube-integration/19651-1133/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19651-1133/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19651-1133/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19651-1133/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19651-1133/.minikube}
	I0916 04:10:35.570930    4792 buildroot.go:174] setting up certificates
	I0916 04:10:35.570942    4792 provision.go:84] configureAuth start
	I0916 04:10:35.570948    4792 provision.go:143] copyHostCerts
	I0916 04:10:35.571033    4792 exec_runner.go:144] found /Users/jenkins/minikube-integration/19651-1133/.minikube/ca.pem, removing ...
	I0916 04:10:35.571056    4792 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19651-1133/.minikube/ca.pem
	I0916 04:10:35.571171    4792 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19651-1133/.minikube/ca.pem (1078 bytes)
	I0916 04:10:35.571387    4792 exec_runner.go:144] found /Users/jenkins/minikube-integration/19651-1133/.minikube/cert.pem, removing ...
	I0916 04:10:35.571391    4792 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19651-1133/.minikube/cert.pem
	I0916 04:10:35.571448    4792 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19651-1133/.minikube/cert.pem (1123 bytes)
	I0916 04:10:35.571577    4792 exec_runner.go:144] found /Users/jenkins/minikube-integration/19651-1133/.minikube/key.pem, removing ...
	I0916 04:10:35.571581    4792 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19651-1133/.minikube/key.pem
	I0916 04:10:35.571634    4792 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19651-1133/.minikube/key.pem (1675 bytes)
	I0916 04:10:35.571742    4792 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19651-1133/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19651-1133/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-716000 san=[127.0.0.1 localhost minikube stopped-upgrade-716000]
	I0916 04:10:35.612126    4792 provision.go:177] copyRemoteCerts
	I0916 04:10:35.612170    4792 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 04:10:35.612179    4792 sshutil.go:53] new ssh client: &{IP:localhost Port:50481 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/stopped-upgrade-716000/id_rsa Username:docker}
	I0916 04:10:35.645715    4792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 04:10:35.652491    4792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0916 04:10:35.658974    4792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 04:10:35.666383    4792 provision.go:87] duration metric: took 95.431667ms to configureAuth
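configureAuth above pushes ca.pem, server.pem, and server-key.pem into /etc/docker so the daemon can require mutually authenticated TLS on tcp://0.0.0.0:2376 (visible in the ExecStart line written further down). A client-side check of such an endpoint, as a sketch assuming the matching client certificate pair from the host's certs directory; all paths and the port are illustrative:

    # Talk to a TLS-guarded Docker endpoint with mutual authentication.
    docker --tlsverify \
      --tlscacert ~/.minikube/certs/ca.pem \
      --tlscert   ~/.minikube/certs/cert.pem \
      --tlskey    ~/.minikube/certs/key.pem \
      -H tcp://localhost:2376 version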
	I0916 04:10:35.666393    4792 buildroot.go:189] setting minikube options for container-runtime
	I0916 04:10:35.666512    4792 config.go:182] Loaded profile config "stopped-upgrade-716000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0916 04:10:35.666553    4792 main.go:141] libmachine: Using SSH client type: native
	I0916 04:10:35.666633    4792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105111190] 0x1051139d0 <nil>  [] 0s} localhost 50481 <nil> <nil>}
	I0916 04:10:35.666638    4792 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0916 04:10:35.728238    4792 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0916 04:10:35.728250    4792 buildroot.go:70] root file system type: tmpfs
	I0916 04:10:35.728310    4792 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0916 04:10:35.728372    4792 main.go:141] libmachine: Using SSH client type: native
	I0916 04:10:35.728490    4792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105111190] 0x1051139d0 <nil>  [] 0s} localhost 50481 <nil> <nil>}
	I0916 04:10:35.728524    4792 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0916 04:10:35.791394    4792 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0916 04:10:35.791448    4792 main.go:141] libmachine: Using SSH client type: native
	I0916 04:10:35.791554    4792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105111190] 0x1051139d0 <nil>  [] 0s} localhost 50481 <nil> <nil>}
	I0916 04:10:35.791562    4792 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0916 04:10:36.152464    4792 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0916 04:10:36.152477    4792 machine.go:96] duration metric: took 779.999542ms to provisionDockerMachine
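The unit install above uses a write-then-swap pattern: render the candidate to docker.service.new, diff it against the live unit (diff also fails when the live unit is absent, as the "can't stat" output shows), and only then move it into place and restart. The shape in isolation, with the unit body elided into a placeholder variable:

    UNIT=/lib/systemd/system/docker.service
    printf '%s' "$NEW_UNIT_BODY" | sudo tee "${UNIT}.new" >/dev/null  # render candidate
    if ! sudo diff -u "$UNIT" "${UNIT}.new"; then                     # changed or missing?
      sudo mv "${UNIT}.new" "$UNIT"
      sudo systemctl daemon-reload
      sudo systemctl -f enable docker
      sudo systemctl -f restart docker
    fi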
	I0916 04:10:36.152488    4792 start.go:293] postStartSetup for "stopped-upgrade-716000" (driver="qemu2")
	I0916 04:10:36.152495    4792 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 04:10:36.152570    4792 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 04:10:36.152580    4792 sshutil.go:53] new ssh client: &{IP:localhost Port:50481 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/stopped-upgrade-716000/id_rsa Username:docker}
	I0916 04:10:36.185692    4792 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 04:10:36.187082    4792 info.go:137] Remote host: Buildroot 2021.02.12
	I0916 04:10:36.187090    4792 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19651-1133/.minikube/addons for local assets ...
	I0916 04:10:36.187169    4792 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19651-1133/.minikube/files for local assets ...
	I0916 04:10:36.187293    4792 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19651-1133/.minikube/files/etc/ssl/certs/16522.pem -> 16522.pem in /etc/ssl/certs
	I0916 04:10:36.187422    4792 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 04:10:36.190535    4792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1133/.minikube/files/etc/ssl/certs/16522.pem --> /etc/ssl/certs/16522.pem (1708 bytes)
	I0916 04:10:36.197758    4792 start.go:296] duration metric: took 45.264792ms for postStartSetup
	I0916 04:10:36.197771    4792 fix.go:56] duration metric: took 21.272257375s for fixHost
	I0916 04:10:36.197815    4792 main.go:141] libmachine: Using SSH client type: native
	I0916 04:10:36.197914    4792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105111190] 0x1051139d0 <nil>  [] 0s} localhost 50481 <nil> <nil>}
	I0916 04:10:36.197919    4792 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0916 04:10:36.259771    4792 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726485036.505115171
	
	I0916 04:10:36.259781    4792 fix.go:216] guest clock: 1726485036.505115171
	I0916 04:10:36.259785    4792 fix.go:229] Guest: 2024-09-16 04:10:36.505115171 -0700 PDT Remote: 2024-09-16 04:10:36.197773 -0700 PDT m=+21.390020167 (delta=307.342171ms)
	I0916 04:10:36.259800    4792 fix.go:200] guest clock delta is within tolerance: 307.342171ms
	I0916 04:10:36.259802    4792 start.go:83] releasing machines lock for "stopped-upgrade-716000", held for 21.334299s
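The fixHost step above samples the guest clock over SSH (date +%s.%N) and compares it with the host clock, accepting the ~307 ms delta as within tolerance. A coarser, portable version of the same check (second precision so it also works with BSD date on the macOS host; port and tolerance are illustrative):

    GUEST=$(ssh -p 50481 docker@localhost 'date +%s')
    HOST=$(date +%s)
    DELTA=$(( GUEST - HOST )); DELTA=${DELTA#-}   # absolute value
    echo "guest/host clock delta: ${DELTA}s"
    [ "$DELTA" -le 2 ] && echo "within tolerance"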
	I0916 04:10:36.259874    4792 ssh_runner.go:195] Run: cat /version.json
	I0916 04:10:36.259887    4792 sshutil.go:53] new ssh client: &{IP:localhost Port:50481 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/stopped-upgrade-716000/id_rsa Username:docker}
	I0916 04:10:36.259874    4792 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 04:10:36.259947    4792 sshutil.go:53] new ssh client: &{IP:localhost Port:50481 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/stopped-upgrade-716000/id_rsa Username:docker}
	W0916 04:10:36.260449    4792 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50481: connect: connection refused
	I0916 04:10:36.260468    4792 retry.go:31] will retry after 350.641498ms: dial tcp [::1]:50481: connect: connection refused
	W0916 04:10:36.657816    4792 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0916 04:10:36.657947    4792 ssh_runner.go:195] Run: systemctl --version
	I0916 04:10:36.661457    4792 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0916 04:10:36.664276    4792 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 04:10:36.664333    4792 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0916 04:10:36.668809    4792 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0916 04:10:36.675230    4792 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0916 04:10:36.675244    4792 start.go:495] detecting cgroup driver to use...
	I0916 04:10:36.675349    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 04:10:36.683943    4792 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0916 04:10:36.687805    4792 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 04:10:36.691396    4792 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0916 04:10:36.691443    4792 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0916 04:10:36.695009    4792 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 04:10:36.698327    4792 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 04:10:36.701430    4792 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 04:10:36.704213    4792 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 04:10:36.706995    4792 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 04:10:36.710192    4792 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 04:10:36.713408    4792 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 04:10:36.716271    4792 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 04:10:36.718990    4792 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 04:10:36.722023    4792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 04:10:36.798999    4792 ssh_runner.go:195] Run: sudo systemctl restart containerd
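The sed runs above rewrite /etc/containerd/config.toml so containerd uses the cgroupfs driver (SystemdCgroup = false), pins the runc v2 runtime, and points conf_dir at /etc/cni/net.d, after which the daemon is reloaded and restarted. The driver switch on its own:

    # Force containerd's runc runtime onto the cgroupfs driver, then restart.
    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' \
      /etc/containerd/config.toml
    sudo systemctl daemon-reload
    sudo systemctl restart containerd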
	I0916 04:10:36.809636    4792 start.go:495] detecting cgroup driver to use...
	I0916 04:10:36.809715    4792 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0916 04:10:36.815918    4792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 04:10:36.824467    4792 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 04:10:36.832848    4792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 04:10:36.837356    4792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 04:10:36.841821    4792 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0916 04:10:36.898919    4792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 04:10:36.904247    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 04:10:36.909578    4792 ssh_runner.go:195] Run: which cri-dockerd
	I0916 04:10:36.910765    4792 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0916 04:10:36.913585    4792 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0916 04:10:36.918492    4792 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0916 04:10:37.000460    4792 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0916 04:10:37.083462    4792 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0916 04:10:37.083525    4792 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0916 04:10:37.088754    4792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 04:10:37.159614    4792 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0916 04:10:38.324039    4792 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.164430083s)
	I0916 04:10:38.324123    4792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0916 04:10:38.331064    4792 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0916 04:10:38.337083    4792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0916 04:10:38.341564    4792 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0916 04:10:38.410705    4792 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0916 04:10:38.510735    4792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 04:10:38.597988    4792 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0916 04:10:38.604209    4792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0916 04:10:38.609398    4792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 04:10:38.697740    4792 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0916 04:10:38.741105    4792 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0916 04:10:38.741206    4792 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0916 04:10:38.743896    4792 start.go:563] Will wait 60s for crictl version
	I0916 04:10:38.743960    4792 ssh_runner.go:195] Run: which crictl
	I0916 04:10:38.745641    4792 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 04:10:38.760469    4792 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
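crictl found the cri-dockerd socket through the /etc/crictl.yaml written above (runtime-endpoint: unix:///var/run/cri-dockerd.sock). The same endpoint can also be given explicitly instead of via the config file:

    # Query the CRI runtime behind cri-dockerd's socket directly.
    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version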
	I0916 04:10:38.760554    4792 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0916 04:10:38.776729    4792 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0916 04:10:38.794577    4792 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0916 04:10:38.794656    4792 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0916 04:10:38.795978    4792 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
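The hosts rewrite above filters out any stale host.minikube.internal line, appends the fresh mapping, and copies the temp file back with cp. One plausible reason for cp rather than mv or sed -i is that cp preserves the target's inode, which matters in setups where /etc/hosts is a bind mount (as with container drivers sharing this code path). As a standalone sketch:

    { grep -v $'\thost.minikube.internal$' /etc/hosts
      printf '10.0.2.2\thost.minikube.internal\n'
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts    # cp rewrites in place, keeping the inode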
	I0916 04:10:38.799575    4792 kubeadm.go:883] updating cluster {Name:stopped-upgrade-716000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50516 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-716000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0916 04:10:38.799629    4792 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0916 04:10:38.799680    4792 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0916 04:10:38.810562    4792 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0916 04:10:38.810574    4792 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0916 04:10:38.810630    4792 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0916 04:10:38.814256    4792 ssh_runner.go:195] Run: which lz4
	I0916 04:10:38.815620    4792 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0916 04:10:38.817025    4792 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0916 04:10:38.817040    4792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0916 04:10:39.778895    4792 docker.go:649] duration metric: took 963.337166ms to copy over tarball
	I0916 04:10:39.778963    4792 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
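The preload path copies a ~360 MB lz4-compressed tarball of container images into the guest and unpacks it over /var, so /var/lib/docker is pre-populated and the first start avoids pulling each image. The extraction command, standalone:

    # Unpack the preloaded image tarball over /var, keeping xattrs so file
    # capabilities (security.capability) survive the round trip.
    sudo tar --xattrs --xattrs-include security.capability \
      -I lz4 -C /var -xf /preloaded.tar.lz4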
	I0916 04:10:38.356618    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:10:38.368213    4655 logs.go:276] 2 containers: [05bfeea67744 097738ff3821]
	I0916 04:10:38.368296    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:10:38.379510    4655 logs.go:276] 1 containers: [e11e4df1f883]
	I0916 04:10:38.379593    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:10:38.390220    4655 logs.go:276] 2 containers: [235c08ec6496 cc2fad651b22]
	I0916 04:10:38.390303    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:10:38.401321    4655 logs.go:276] 1 containers: [1ae068289404]
	I0916 04:10:38.401406    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:10:38.415617    4655 logs.go:276] 2 containers: [309ff69f986d 11b972b52433]
	I0916 04:10:38.415697    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:10:38.426627    4655 logs.go:276] 0 containers: []
	W0916 04:10:38.426642    4655 logs.go:278] No container was found matching "kindnet"
	I0916 04:10:38.426716    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:10:38.437184    4655 logs.go:276] 2 containers: [cd2986a75a6f 4e73ffbfc80e]
	I0916 04:10:38.437203    4655 logs.go:123] Gathering logs for kubelet ...
	I0916 04:10:38.437208    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:10:38.472904    4655 logs.go:123] Gathering logs for etcd [05bfeea67744] ...
	I0916 04:10:38.472923    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bfeea67744"
	I0916 04:10:38.487648    4655 logs.go:123] Gathering logs for coredns [e11e4df1f883] ...
	I0916 04:10:38.487662    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e11e4df1f883"
	I0916 04:10:38.499398    4655 logs.go:123] Gathering logs for kube-controller-manager [11b972b52433] ...
	I0916 04:10:38.499408    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b972b52433"
	I0916 04:10:38.511655    4655 logs.go:123] Gathering logs for Docker ...
	I0916 04:10:38.511664    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:10:38.538095    4655 logs.go:123] Gathering logs for etcd [097738ff3821] ...
	I0916 04:10:38.538109    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 097738ff3821"
	I0916 04:10:38.555052    4655 logs.go:123] Gathering logs for kube-scheduler [235c08ec6496] ...
	I0916 04:10:38.555069    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 235c08ec6496"
	I0916 04:10:38.568282    4655 logs.go:123] Gathering logs for kube-scheduler [cc2fad651b22] ...
	I0916 04:10:38.568293    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc2fad651b22"
	I0916 04:10:38.580905    4655 logs.go:123] Gathering logs for storage-provisioner [4e73ffbfc80e] ...
	I0916 04:10:38.580916    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e73ffbfc80e"
	I0916 04:10:38.603776    4655 logs.go:123] Gathering logs for kube-apiserver [6e10ade08bbc] ...
	I0916 04:10:38.603787    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e10ade08bbc"
	I0916 04:10:38.619419    4655 logs.go:123] Gathering logs for kube-apiserver [f96872a76692] ...
	I0916 04:10:38.619434    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f96872a76692"
	I0916 04:10:38.631317    4655 logs.go:123] Gathering logs for dmesg ...
	I0916 04:10:38.631331    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:10:38.636375    4655 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:10:38.636381    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:10:38.674226    4655 logs.go:123] Gathering logs for kube-proxy [1ae068289404] ...
	I0916 04:10:38.674236    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ae068289404"
	I0916 04:10:38.689179    4655 logs.go:123] Gathering logs for kube-controller-manager [309ff69f986d] ...
	I0916 04:10:38.689189    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 309ff69f986d"
	I0916 04:10:38.707123    4655 logs.go:123] Gathering logs for storage-provisioner [cd2986a75a6f] ...
	I0916 04:10:38.707135    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd2986a75a6f"
	I0916 04:10:38.721383    4655 logs.go:123] Gathering logs for container status ...
	I0916 04:10:38.721395    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:10:41.236157    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:10:41.087279    4792 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.308327667s)
	I0916 04:10:41.087293    4792 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0916 04:10:41.103841    4792 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0916 04:10:41.106960    4792 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0916 04:10:41.112105    4792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 04:10:41.197571    4792 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0916 04:10:42.466763    4792 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.269199583s)
	I0916 04:10:42.466889    4792 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0916 04:10:42.479495    4792 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0916 04:10:42.479505    4792 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0916 04:10:42.479509    4792 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0916 04:10:42.483843    4792 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 04:10:42.485098    4792 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0916 04:10:42.487292    4792 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 04:10:42.487481    4792 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0916 04:10:42.489627    4792 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0916 04:10:42.489680    4792 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0916 04:10:42.490943    4792 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0916 04:10:42.491251    4792 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0916 04:10:42.491747    4792 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0916 04:10:42.492726    4792 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0916 04:10:42.493109    4792 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0916 04:10:42.494079    4792 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0916 04:10:42.494179    4792 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0916 04:10:42.494203    4792 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0916 04:10:42.496104    4792 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0916 04:10:42.496104    4792 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0916 04:10:42.870579    4792 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0916 04:10:42.881581    4792 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0916 04:10:42.881608    4792 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0916 04:10:42.881674    4792 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0916 04:10:42.891642    4792 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0916 04:10:42.905347    4792 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0916 04:10:42.915362    4792 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0916 04:10:42.915381    4792 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0916 04:10:42.915451    4792 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0916 04:10:42.925845    4792 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0916 04:10:42.933668    4792 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0916 04:10:42.943469    4792 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0916 04:10:42.943489    4792 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0916 04:10:42.943552    4792 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0916 04:10:42.954051    4792 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0916 04:10:42.965669    4792 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0916 04:10:42.968683    4792 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0916 04:10:42.985507    4792 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0916 04:10:42.985529    4792 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0916 04:10:42.985596    4792 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0916 04:10:42.985628    4792 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0916 04:10:42.985638    4792 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0916 04:10:42.985671    4792 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0916 04:10:43.000218    4792 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0916 04:10:43.000294    4792 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	W0916 04:10:43.001260    4792 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0916 04:10:43.001320    4792 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0916 04:10:43.001369    4792 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0916 04:10:43.014793    4792 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0916 04:10:43.014816    4792 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0916 04:10:43.014886    4792 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0916 04:10:43.019935    4792 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0916 04:10:43.019955    4792 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0916 04:10:43.020017    4792 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0916 04:10:43.028253    4792 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0916 04:10:43.028394    4792 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0916 04:10:43.032743    4792 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0916 04:10:43.032855    4792 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0916 04:10:43.033838    4792 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0916 04:10:43.033850    4792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0916 04:10:43.034161    4792 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0916 04:10:43.034170    4792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0916 04:10:43.041978    4792 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0916 04:10:43.041999    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0916 04:10:43.097711    4792 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0916 04:10:43.097737    4792 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0916 04:10:43.097753    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0916 04:10:43.142575    4792 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	W0916 04:10:43.320995    4792 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0916 04:10:43.321160    4792 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 04:10:43.334381    4792 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0916 04:10:43.334407    4792 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 04:10:43.334488    4792 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 04:10:43.348862    4792 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0916 04:10:43.349008    4792 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0916 04:10:43.350401    4792 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0916 04:10:43.350413    4792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0916 04:10:43.380219    4792 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0916 04:10:43.380234    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0916 04:10:43.630642    4792 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0916 04:10:43.630675    4792 cache_images.go:92] duration metric: took 1.151181875s to LoadCachedImages
	W0916 04:10:43.630710    4792 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
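Each image missing from the runtime is copied from the host cache and fed to docker load, as in the pause, coredns, and storage-provisioner steps above. The load-and-verify pair for one image:

    # Load a cached image tarball into the guest's Docker daemon and confirm.
    sudo cat /var/lib/minikube/images/pause_3.7 | docker load
    docker image inspect --format '{{.Id}}' registry.k8s.io/pause:3.7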
	I0916 04:10:43.630717    4792 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0916 04:10:43.630773    4792 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-716000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-716000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 04:10:43.630844    4792 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0916 04:10:43.644035    4792 cni.go:84] Creating CNI manager for ""
	I0916 04:10:43.644056    4792 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 04:10:43.644062    4792 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 04:10:43.644072    4792 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-716000 NodeName:stopped-upgrade-716000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 04:10:43.644159    4792 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-716000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
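The generated file above concatenates four documents: InitConfiguration (node registration and API endpoint), ClusterConfiguration (control-plane component flags and CIDRs), KubeletConfiguration, and KubeProxyConfiguration. A config of this shape can be exercised without modifying the node (standard kubeadm flags; the exact invocation is illustrative):

    # Render what kubeadm would do with the config, without changing the host.
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run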
	
	I0916 04:10:43.644240    4792 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0916 04:10:43.647054    4792 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 04:10:43.647088    4792 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 04:10:43.650147    4792 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0916 04:10:43.655288    4792 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 04:10:43.660361    4792 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0916 04:10:43.665472    4792 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0916 04:10:43.666892    4792 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 04:10:43.670798    4792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 04:10:43.757491    4792 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 04:10:43.764316    4792 certs.go:68] Setting up /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/stopped-upgrade-716000 for IP: 10.0.2.15
	I0916 04:10:43.764325    4792 certs.go:194] generating shared ca certs ...
	I0916 04:10:43.764335    4792 certs.go:226] acquiring lock for ca certs: {Name:mk7bbdd60870074cef3b6b7f58dae6ae1dc0ffa1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 04:10:43.764516    4792 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19651-1133/.minikube/ca.key
	I0916 04:10:43.764568    4792 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19651-1133/.minikube/proxy-client-ca.key
	I0916 04:10:43.764575    4792 certs.go:256] generating profile certs ...
	I0916 04:10:43.764651    4792 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/stopped-upgrade-716000/client.key
	I0916 04:10:43.764670    4792 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/stopped-upgrade-716000/apiserver.key.ff75fb31
	I0916 04:10:43.764678    4792 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/stopped-upgrade-716000/apiserver.crt.ff75fb31 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0916 04:10:43.853550    4792 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/stopped-upgrade-716000/apiserver.crt.ff75fb31 ...
	I0916 04:10:43.853562    4792 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/stopped-upgrade-716000/apiserver.crt.ff75fb31: {Name:mke3c93083ff8ba32761762450527a69939c89bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 04:10:43.854113    4792 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/stopped-upgrade-716000/apiserver.key.ff75fb31 ...
	I0916 04:10:43.854120    4792 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/stopped-upgrade-716000/apiserver.key.ff75fb31: {Name:mkd50dd7bba0e5318d7c3f16600658e8553bb63f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 04:10:43.854277    4792 certs.go:381] copying /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/stopped-upgrade-716000/apiserver.crt.ff75fb31 -> /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/stopped-upgrade-716000/apiserver.crt
	I0916 04:10:43.854402    4792 certs.go:385] copying /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/stopped-upgrade-716000/apiserver.key.ff75fb31 -> /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/stopped-upgrade-716000/apiserver.key
	I0916 04:10:43.854557    4792 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/stopped-upgrade-716000/proxy-client.key
	I0916 04:10:43.854705    4792 certs.go:484] found cert: /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/1652.pem (1338 bytes)
	W0916 04:10:43.854736    4792 certs.go:480] ignoring /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/1652_empty.pem, impossibly tiny 0 bytes
	I0916 04:10:43.854742    4792 certs.go:484] found cert: /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 04:10:43.854768    4792 certs.go:484] found cert: /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/ca.pem (1078 bytes)
	I0916 04:10:43.854786    4792 certs.go:484] found cert: /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/cert.pem (1123 bytes)
	I0916 04:10:43.854804    4792 certs.go:484] found cert: /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/key.pem (1675 bytes)
	I0916 04:10:43.854869    4792 certs.go:484] found cert: /Users/jenkins/minikube-integration/19651-1133/.minikube/files/etc/ssl/certs/16522.pem (1708 bytes)
	I0916 04:10:43.855273    4792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1133/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 04:10:43.863014    4792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1133/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0916 04:10:43.873402    4792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1133/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 04:10:43.881474    4792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1133/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 04:10:43.889061    4792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/stopped-upgrade-716000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0916 04:10:43.895285    4792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/stopped-upgrade-716000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 04:10:43.902152    4792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/stopped-upgrade-716000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 04:10:43.909558    4792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/stopped-upgrade-716000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 04:10:43.916479    4792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1133/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 04:10:43.923077    4792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/1652.pem --> /usr/share/ca-certificates/1652.pem (1338 bytes)
	I0916 04:10:43.930380    4792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1133/.minikube/files/etc/ssl/certs/16522.pem --> /usr/share/ca-certificates/16522.pem (1708 bytes)
	I0916 04:10:43.937499    4792 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 04:10:43.942353    4792 ssh_runner.go:195] Run: openssl version
	I0916 04:10:43.944222    4792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 04:10:43.947025    4792 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 04:10:43.948459    4792 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:20 /usr/share/ca-certificates/minikubeCA.pem
	I0916 04:10:43.948486    4792 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 04:10:43.950075    4792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 04:10:43.953010    4792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1652.pem && ln -fs /usr/share/ca-certificates/1652.pem /etc/ssl/certs/1652.pem"
	I0916 04:10:43.955818    4792 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1652.pem
	I0916 04:10:43.957112    4792 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:35 /usr/share/ca-certificates/1652.pem
	I0916 04:10:43.957135    4792 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1652.pem
	I0916 04:10:43.959484    4792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1652.pem /etc/ssl/certs/51391683.0"
	I0916 04:10:43.962589    4792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16522.pem && ln -fs /usr/share/ca-certificates/16522.pem /etc/ssl/certs/16522.pem"
	I0916 04:10:43.966007    4792 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16522.pem
	I0916 04:10:43.967488    4792 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:35 /usr/share/ca-certificates/16522.pem
	I0916 04:10:43.967509    4792 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16522.pem
	I0916 04:10:43.969399    4792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16522.pem /etc/ssl/certs/3ec20f2e.0"
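The ls/openssl/ln triples above install each CA into the OpenSSL trust-directory convention: /etc/ssl/certs is scanned via symlinks named <subject_hash>.0, which is why minikubeCA.pem ends up linked as b5213941.0. The same step done by hand:

    # compute the subject hash, then create the <hash>.0 trust link
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"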
	I0916 04:10:43.972210    4792 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 04:10:43.973529    4792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0916 04:10:43.975722    4792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0916 04:10:43.977480    4792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0916 04:10:43.979254    4792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0916 04:10:43.980970    4792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0916 04:10:43.982628    4792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
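Each -checkend 86400 run above asks openssl whether the certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit marks the cert as expiring, so the restart path regenerates it. For example:

    # exit 0 = still valid a day from now; non-zero = expiring soon
    if ! openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/etcd/peer.crt; then
      echo "peer.crt expires within 24h - regenerate"
    fi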
	I0916 04:10:43.984501    4792 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-716000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50516 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-716000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0916 04:10:43.984579    4792 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0916 04:10:43.995167    4792 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 04:10:43.998332    4792 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0916 04:10:43.998344    4792 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0916 04:10:43.998371    4792 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0916 04:10:44.001736    4792 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0916 04:10:44.002043    4792 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-716000" does not appear in /Users/jenkins/minikube-integration/19651-1133/kubeconfig
	I0916 04:10:44.002156    4792 kubeconfig.go:62] /Users/jenkins/minikube-integration/19651-1133/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-716000" cluster setting kubeconfig missing "stopped-upgrade-716000" context setting]
	I0916 04:10:44.002387    4792 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19651-1133/kubeconfig: {Name:mk6c71bad5b7ea3c1178ed072ac13c9e3d1e147d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 04:10:44.002819    4792 kapi.go:59] client config for stopped-upgrade-716000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/stopped-upgrade-716000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/stopped-upgrade-716000/client.key", CAFile:"/Users/jenkins/minikube-integration/19651-1133/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1066e9800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 04:10:44.003153    4792 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0916 04:10:44.005880    4792 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-716000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
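The drift check is a plain unified diff between the kubeadm config already on disk and the freshly rendered .new file; any hunk — here the criSocket gaining its unix:// scheme and cgroupDriver switching from systemd to cgroupfs — selects the reconfigure branch. Reduced to its shape:

    # diff exits non-zero when the files differ (or on error)
    if ! sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
      # config drifted: stop kube-system containers, install the new file,
      # then rerun the kubeadm init phases against it
      sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
    fi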
	I0916 04:10:44.005888    4792 kubeadm.go:1160] stopping kube-system containers ...
	I0916 04:10:44.005935    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0916 04:10:44.016566    4792 docker.go:483] Stopping containers: [97104cca0786 40ab2f675d22 03acb758d55b 1973f852f436 dbc4a78c163a a460b4b3d0a7 609e5463648c 7353da114ab2 fe7b62f74c09]
	I0916 04:10:44.016648    4792 ssh_runner.go:195] Run: docker stop 97104cca0786 40ab2f675d22 03acb758d55b 1973f852f436 dbc4a78c163a a460b4b3d0a7 609e5463648c 7353da114ab2 fe7b62f74c09
	I0916 04:10:44.027209    4792 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0916 04:10:44.033431    4792 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 04:10:44.036194    4792 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 04:10:44.036199    4792 kubeadm.go:157] found existing configuration files:
	
	I0916 04:10:44.036227    4792 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50516 /etc/kubernetes/admin.conf
	I0916 04:10:44.038709    4792 kubeadm.go:163] "https://control-plane.minikube.internal:50516" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50516 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 04:10:44.038745    4792 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 04:10:44.041646    4792 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50516 /etc/kubernetes/kubelet.conf
	I0916 04:10:44.044089    4792 kubeadm.go:163] "https://control-plane.minikube.internal:50516" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50516 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 04:10:44.044112    4792 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 04:10:44.046735    4792 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50516 /etc/kubernetes/controller-manager.conf
	I0916 04:10:44.049764    4792 kubeadm.go:163] "https://control-plane.minikube.internal:50516" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50516 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 04:10:44.049790    4792 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 04:10:44.052288    4792 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50516 /etc/kubernetes/scheduler.conf
	I0916 04:10:44.054773    4792 kubeadm.go:163] "https://control-plane.minikube.internal:50516" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50516 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 04:10:44.054799    4792 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 04:10:44.057705    4792 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 04:10:44.060161    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0916 04:10:44.082455    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0916 04:10:44.543818    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0916 04:10:44.679302    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0916 04:10:44.702248    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
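Rather than a full kubeadm init, the restart path replays individual init phases against the updated config, with PATH pinned to the cached v1.24.1 binaries. The five runs above, condensed:

    # $phase is intentionally unquoted so "certs all" splits into subcommand + arg
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
      sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" \
        kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
    done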
	I0916 04:10:44.723416    4792 api_server.go:52] waiting for apiserver process to appear ...
	I0916 04:10:44.723497    4792 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 04:10:46.236524    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:10:46.236631    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:10:46.254140    4655 logs.go:276] 2 containers: [6e10ade08bbc f96872a76692]
	I0916 04:10:46.254233    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:10:46.265546    4655 logs.go:276] 2 containers: [05bfeea67744 097738ff3821]
	I0916 04:10:46.265634    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:10:46.278403    4655 logs.go:276] 1 containers: [e11e4df1f883]
	I0916 04:10:46.278486    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:10:46.289056    4655 logs.go:276] 2 containers: [235c08ec6496 cc2fad651b22]
	I0916 04:10:46.289139    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:10:46.299852    4655 logs.go:276] 1 containers: [1ae068289404]
	I0916 04:10:46.299931    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:10:46.310766    4655 logs.go:276] 2 containers: [309ff69f986d 11b972b52433]
	I0916 04:10:46.310847    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:10:46.323345    4655 logs.go:276] 0 containers: []
	W0916 04:10:46.323357    4655 logs.go:278] No container was found matching "kindnet"
	I0916 04:10:46.323427    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:10:46.334332    4655 logs.go:276] 2 containers: [cd2986a75a6f 4e73ffbfc80e]
	I0916 04:10:46.334348    4655 logs.go:123] Gathering logs for etcd [097738ff3821] ...
	I0916 04:10:46.334354    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 097738ff3821"
	I0916 04:10:46.347830    4655 logs.go:123] Gathering logs for kube-scheduler [cc2fad651b22] ...
	I0916 04:10:46.347840    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc2fad651b22"
	I0916 04:10:46.359825    4655 logs.go:123] Gathering logs for kubelet ...
	I0916 04:10:46.359837    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:10:46.396988    4655 logs.go:123] Gathering logs for kube-apiserver [6e10ade08bbc] ...
	I0916 04:10:46.396996    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e10ade08bbc"
	I0916 04:10:46.414541    4655 logs.go:123] Gathering logs for Docker ...
	I0916 04:10:46.414550    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:10:46.437628    4655 logs.go:123] Gathering logs for container status ...
	I0916 04:10:46.437636    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:10:46.449252    4655 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:10:46.449266    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:10:46.487286    4655 logs.go:123] Gathering logs for kube-apiserver [f96872a76692] ...
	I0916 04:10:46.487298    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f96872a76692"
	I0916 04:10:46.503535    4655 logs.go:123] Gathering logs for kube-controller-manager [11b972b52433] ...
	I0916 04:10:46.503546    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b972b52433"
	I0916 04:10:46.523514    4655 logs.go:123] Gathering logs for kube-scheduler [235c08ec6496] ...
	I0916 04:10:46.523525    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 235c08ec6496"
	I0916 04:10:46.535968    4655 logs.go:123] Gathering logs for kube-controller-manager [309ff69f986d] ...
	I0916 04:10:46.535979    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 309ff69f986d"
	I0916 04:10:46.554543    4655 logs.go:123] Gathering logs for coredns [e11e4df1f883] ...
	I0916 04:10:46.554554    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e11e4df1f883"
	I0916 04:10:46.566356    4655 logs.go:123] Gathering logs for kube-proxy [1ae068289404] ...
	I0916 04:10:46.566366    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ae068289404"
	I0916 04:10:46.582000    4655 logs.go:123] Gathering logs for storage-provisioner [cd2986a75a6f] ...
	I0916 04:10:46.582014    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd2986a75a6f"
	I0916 04:10:46.593933    4655 logs.go:123] Gathering logs for storage-provisioner [4e73ffbfc80e] ...
	I0916 04:10:46.593950    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e73ffbfc80e"
	I0916 04:10:46.605906    4655 logs.go:123] Gathering logs for dmesg ...
	I0916 04:10:46.605919    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:10:46.610296    4655 logs.go:123] Gathering logs for etcd [05bfeea67744] ...
	I0916 04:10:46.610303    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bfeea67744"
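While one profile waits on healthz, the other run (PID 4655) is in its diagnostic pass: container IDs are discovered through the k8s_<container>_<pod>_<namespace>_... naming convention that cri-dockerd applies, then each component's last 400 log lines are pulled, plus the kubelet and docker units from journald and a crictl-or-docker ps fallback. The per-component step, reduced:

    # pull recent logs for every container whose name matches the component
    for id in $(docker ps -a --filter=name=k8s_kube-apiserver --format '{{.ID}}'); do
      docker logs --tail 400 "$id"
    done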
	I0916 04:10:45.225626    4792 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 04:10:45.725541    4792 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 04:10:45.729808    4792 api_server.go:72] duration metric: took 1.006414458s to wait for apiserver process to appear ...
	I0916 04:10:45.729824    4792 api_server.go:88] waiting for apiserver healthz status ...
	I0916 04:10:45.729833    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:10:49.127988    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:10:50.731987    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:10:50.732108    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:10:54.130076    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:10:54.130320    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:10:54.150603    4655 logs.go:276] 2 containers: [6e10ade08bbc f96872a76692]
	I0916 04:10:54.150717    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:10:54.165222    4655 logs.go:276] 2 containers: [05bfeea67744 097738ff3821]
	I0916 04:10:54.165312    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:10:54.176823    4655 logs.go:276] 1 containers: [e11e4df1f883]
	I0916 04:10:54.176911    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:10:54.188093    4655 logs.go:276] 2 containers: [235c08ec6496 cc2fad651b22]
	I0916 04:10:54.188182    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:10:54.198775    4655 logs.go:276] 1 containers: [1ae068289404]
	I0916 04:10:54.198850    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:10:54.209471    4655 logs.go:276] 2 containers: [309ff69f986d 11b972b52433]
	I0916 04:10:54.209551    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:10:54.223825    4655 logs.go:276] 0 containers: []
	W0916 04:10:54.223841    4655 logs.go:278] No container was found matching "kindnet"
	I0916 04:10:54.223912    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:10:54.238524    4655 logs.go:276] 2 containers: [cd2986a75a6f 4e73ffbfc80e]
	I0916 04:10:54.238541    4655 logs.go:123] Gathering logs for etcd [097738ff3821] ...
	I0916 04:10:54.238547    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 097738ff3821"
	I0916 04:10:54.251308    4655 logs.go:123] Gathering logs for kube-controller-manager [309ff69f986d] ...
	I0916 04:10:54.251318    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 309ff69f986d"
	I0916 04:10:54.268324    4655 logs.go:123] Gathering logs for container status ...
	I0916 04:10:54.268334    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:10:54.281329    4655 logs.go:123] Gathering logs for coredns [e11e4df1f883] ...
	I0916 04:10:54.281339    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e11e4df1f883"
	I0916 04:10:54.292767    4655 logs.go:123] Gathering logs for kube-scheduler [235c08ec6496] ...
	I0916 04:10:54.292779    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 235c08ec6496"
	I0916 04:10:54.304822    4655 logs.go:123] Gathering logs for kube-proxy [1ae068289404] ...
	I0916 04:10:54.304835    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ae068289404"
	I0916 04:10:54.316929    4655 logs.go:123] Gathering logs for storage-provisioner [cd2986a75a6f] ...
	I0916 04:10:54.316940    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd2986a75a6f"
	I0916 04:10:54.329342    4655 logs.go:123] Gathering logs for dmesg ...
	I0916 04:10:54.329353    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:10:54.333675    4655 logs.go:123] Gathering logs for kube-apiserver [f96872a76692] ...
	I0916 04:10:54.333681    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f96872a76692"
	I0916 04:10:54.345497    4655 logs.go:123] Gathering logs for etcd [05bfeea67744] ...
	I0916 04:10:54.345507    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bfeea67744"
	I0916 04:10:54.359331    4655 logs.go:123] Gathering logs for kubelet ...
	I0916 04:10:54.359340    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:10:54.396095    4655 logs.go:123] Gathering logs for kube-apiserver [6e10ade08bbc] ...
	I0916 04:10:54.396104    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e10ade08bbc"
	I0916 04:10:54.410530    4655 logs.go:123] Gathering logs for kube-controller-manager [11b972b52433] ...
	I0916 04:10:54.410541    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b972b52433"
	I0916 04:10:54.421986    4655 logs.go:123] Gathering logs for Docker ...
	I0916 04:10:54.421998    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:10:54.445561    4655 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:10:54.445570    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:10:54.479365    4655 logs.go:123] Gathering logs for kube-scheduler [cc2fad651b22] ...
	I0916 04:10:54.479374    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc2fad651b22"
	I0916 04:10:54.491062    4655 logs.go:123] Gathering logs for storage-provisioner [4e73ffbfc80e] ...
	I0916 04:10:54.491074    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e73ffbfc80e"
	I0916 04:10:57.002520    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:10:55.732879    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:10:55.732903    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:11:02.004865    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:11:02.004928    4655 kubeadm.go:597] duration metric: took 4m4.27247075s to restartPrimaryControlPlane
	W0916 04:11:02.004992    4655 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0916 04:11:02.005015    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0916 04:11:03.013524    4655 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.008518667s)
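After roughly four minutes without a healthy apiserver (the 4m4.27s metric above), the run abandons the in-place restart and wipes the control plane before a clean re-init. The reset it issues, for reference: --force skips kubeadm's confirmation prompt, and the CRI socket must name the runtime actually in use (cri-dockerd here):

    sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" \
      kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force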
	I0916 04:11:03.013601    4655 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 04:11:03.018580    4655 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 04:11:03.021491    4655 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 04:11:03.024421    4655 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 04:11:03.024427    4655 kubeadm.go:157] found existing configuration files:
	
	I0916 04:11:03.024462    4655 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50297 /etc/kubernetes/admin.conf
	I0916 04:11:03.027415    4655 kubeadm.go:163] "https://control-plane.minikube.internal:50297" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50297 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 04:11:03.027444    4655 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 04:11:03.029944    4655 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50297 /etc/kubernetes/kubelet.conf
	I0916 04:11:03.032303    4655 kubeadm.go:163] "https://control-plane.minikube.internal:50297" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50297 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 04:11:03.032328    4655 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 04:11:03.035067    4655 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50297 /etc/kubernetes/controller-manager.conf
	I0916 04:11:03.037577    4655 kubeadm.go:163] "https://control-plane.minikube.internal:50297" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50297 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 04:11:03.037624    4655 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 04:11:03.040690    4655 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50297 /etc/kubernetes/scheduler.conf
	I0916 04:11:03.043937    4655 kubeadm.go:163] "https://control-plane.minikube.internal:50297" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50297 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 04:11:03.044036    4655 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 04:11:03.047630    4655 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
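The --ignore-preflight-errors list on this re-init suppresses exactly the checks expected to fire on a reused VM: pre-existing manifest files and directories under /etc/kubernetes and /var/lib/minikube, the already-bound kubelet port 10250, swap, and the small CPU/memory allocation. Any preflight failure outside that list would still abort. As a smaller example, waving through only the port check would look like:

    sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" \
      kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=Port-10250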
	I0916 04:11:03.066573    4655 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0916 04:11:03.066679    4655 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 04:11:03.116708    4655 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 04:11:03.116769    4655 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 04:11:03.116816    4655 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0916 04:11:03.168358    4655 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 04:11:03.172486    4655 out.go:235]   - Generating certificates and keys ...
	I0916 04:11:03.172524    4655 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 04:11:03.172560    4655 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 04:11:03.172600    4655 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0916 04:11:03.172649    4655 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0916 04:11:03.172685    4655 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0916 04:11:03.172728    4655 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0916 04:11:03.172770    4655 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0916 04:11:03.172802    4655 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0916 04:11:03.172846    4655 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0916 04:11:03.172886    4655 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0916 04:11:03.172909    4655 kubeadm.go:310] [certs] Using the existing "sa" key
	I0916 04:11:03.172942    4655 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 04:11:03.307629    4655 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 04:11:03.408377    4655 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 04:11:03.518888    4655 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 04:11:03.767888    4655 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 04:11:03.799260    4655 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 04:11:03.799681    4655 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 04:11:03.799720    4655 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 04:11:03.869356    4655 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 04:11:00.733337    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:11:00.733380    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:11:03.872797    4655 out.go:235]   - Booting up control plane ...
	I0916 04:11:03.872849    4655 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 04:11:03.872886    4655 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 04:11:03.872920    4655 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 04:11:03.873337    4655 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 04:11:03.873416    4655 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0916 04:11:08.375116    4655 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.501741 seconds
	I0916 04:11:08.375193    4655 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 04:11:08.379792    4655 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 04:11:08.890669    4655 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 04:11:08.890785    4655 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-588000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 04:11:09.397277    4655 kubeadm.go:310] [bootstrap-token] Using token: boxq8t.ye2mb6w3uyb5n055
	I0916 04:11:09.400922    4655 out.go:235]   - Configuring RBAC rules ...
	I0916 04:11:09.400992    4655 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 04:11:09.401045    4655 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 04:11:09.404600    4655 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 04:11:09.405680    4655 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 04:11:09.406607    4655 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 04:11:09.407624    4655 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 04:11:09.410846    4655 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 04:11:09.584092    4655 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 04:11:09.802137    4655 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 04:11:09.802668    4655 kubeadm.go:310] 
	I0916 04:11:09.802705    4655 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 04:11:09.802715    4655 kubeadm.go:310] 
	I0916 04:11:09.802778    4655 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 04:11:09.802783    4655 kubeadm.go:310] 
	I0916 04:11:09.802799    4655 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 04:11:09.802837    4655 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 04:11:09.802870    4655 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 04:11:09.802875    4655 kubeadm.go:310] 
	I0916 04:11:09.802915    4655 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 04:11:09.802920    4655 kubeadm.go:310] 
	I0916 04:11:09.802948    4655 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 04:11:09.802952    4655 kubeadm.go:310] 
	I0916 04:11:09.802988    4655 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 04:11:09.803037    4655 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 04:11:09.803092    4655 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 04:11:09.803097    4655 kubeadm.go:310] 
	I0916 04:11:09.803145    4655 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 04:11:09.803210    4655 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 04:11:09.803215    4655 kubeadm.go:310] 
	I0916 04:11:09.803276    4655 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token boxq8t.ye2mb6w3uyb5n055 \
	I0916 04:11:09.803347    4655 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:dd10c29c679a7875936030b69b87136fbf1dbd0c06d390de7566972b8cb65116 \
	I0916 04:11:09.803362    4655 kubeadm.go:310] 	--control-plane 
	I0916 04:11:09.803368    4655 kubeadm.go:310] 
	I0916 04:11:09.803419    4655 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 04:11:09.803423    4655 kubeadm.go:310] 
	I0916 04:11:09.803467    4655 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token boxq8t.ye2mb6w3uyb5n055 \
	I0916 04:11:09.803530    4655 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:dd10c29c679a7875936030b69b87136fbf1dbd0c06d390de7566972b8cb65116 
	I0916 04:11:09.803613    4655 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0916 04:11:09.803622    4655 cni.go:84] Creating CNI manager for ""
	I0916 04:11:09.803632    4655 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 04:11:09.807395    4655 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0916 04:11:05.734105    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:11:05.734130    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:11:09.811299    4655 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0916 04:11:09.815828    4655 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
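For the qemu2 driver with the docker runtime on Kubernetes >= 1.24, minikube installs its bridge CNI by copying a small conflist into /etc/cni/net.d, as above. A hedged sketch of what such a bridge+portmap conflist looks like (field values are illustrative, not read from this run):

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {"type": "bridge", "bridge": "bridge", "isDefaultGateway": true,
         "ipMasq": true, "hairpinMode": true,
         "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}},
        {"type": "portmap", "capabilities": {"portMappings": true}}
      ]
    }
    EOF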
	I0916 04:11:09.821425    4655 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 04:11:09.821540    4655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-588000 minikube.k8s.io/updated_at=2024_09_16T04_11_09_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=running-upgrade-588000 minikube.k8s.io/primary=true
	I0916 04:11:09.821543    4655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 04:11:09.864069    4655 kubeadm.go:1113] duration metric: took 42.590084ms to wait for elevateKubeSystemPrivileges
	I0916 04:11:09.864091    4655 ops.go:34] apiserver oom_adj: -16
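The oom_adj of -16 read above is the legacy /proc view of the oom_score_adj kubelet assigns: critical static pods get oom_score_adj -997, which maps to -16 on the old -17..15 scale, keeping the apiserver near the back of the OOM-killer's queue. The probe itself:

    cat /proc/$(pgrep kube-apiserver)/oom_adj   # prints -16 here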
	I0916 04:11:09.867820    4655 kubeadm.go:394] duration metric: took 4m12.157761792s to StartCluster
	I0916 04:11:09.867834    4655 settings.go:142] acquiring lock: {Name:mk9072b559308de66cf3dabb49aa5dd0b6d18e61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 04:11:09.867916    4655 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19651-1133/kubeconfig
	I0916 04:11:09.868347    4655 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19651-1133/kubeconfig: {Name:mk6c71bad5b7ea3c1178ed072ac13c9e3d1e147d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 04:11:09.868547    4655 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 04:11:09.868592    4655 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 04:11:09.868631    4655 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-588000"
	I0916 04:11:09.868671    4655 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-588000"
	W0916 04:11:09.868678    4655 addons.go:243] addon storage-provisioner should already be in state true
	I0916 04:11:09.868688    4655 host.go:66] Checking if "running-upgrade-588000" exists ...
	I0916 04:11:09.868634    4655 config.go:182] Loaded profile config "running-upgrade-588000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0916 04:11:09.868656    4655 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-588000"
	I0916 04:11:09.868733    4655 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-588000"
	I0916 04:11:09.869636    4655 kapi.go:59] client config for running-upgrade-588000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/running-upgrade-588000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/running-upgrade-588000/client.key", CAFile:"/Users/jenkins/minikube-integration/19651-1133/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103b55800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 04:11:09.869767    4655 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-588000"
	W0916 04:11:09.869772    4655 addons.go:243] addon default-storageclass should already be in state true
	I0916 04:11:09.869780    4655 host.go:66] Checking if "running-upgrade-588000" exists ...
	I0916 04:11:09.872318    4655 out.go:177] * Verifying Kubernetes components...
	I0916 04:11:09.872739    4655 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 04:11:09.873703    4655 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 04:11:09.873712    4655 sshutil.go:53] new ssh client: &{IP:localhost Port:50265 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/running-upgrade-588000/id_rsa Username:docker}
	I0916 04:11:09.877418    4655 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 04:11:09.881315    4655 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 04:11:09.885428    4655 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 04:11:09.885466    4655 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 04:11:09.885500    4655 sshutil.go:53] new ssh client: &{IP:localhost Port:50265 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/running-upgrade-588000/id_rsa Username:docker}
	I0916 04:11:09.956659    4655 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 04:11:09.961911    4655 api_server.go:52] waiting for apiserver process to appear ...
	I0916 04:11:09.961960    4655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 04:11:09.965753    4655 api_server.go:72] duration metric: took 97.197459ms to wait for apiserver process to appear ...
	I0916 04:11:09.965760    4655 api_server.go:88] waiting for apiserver healthz status ...
	I0916 04:11:09.965767    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:11:09.990082    4655 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 04:11:10.015639    4655 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 04:11:10.315917    4655 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0916 04:11:10.315930    4655 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0916 04:11:10.734889    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:11:10.734925    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:11:14.966367    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:11:14.966417    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:11:15.735965    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:11:15.736015    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:11:19.967647    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:11:19.967678    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:11:20.737495    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:11:20.737538    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:11:24.967871    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:11:24.967895    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:11:25.739286    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:11:25.739314    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:11:29.968125    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:11:29.968160    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:11:30.741470    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:11:30.741522    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:11:34.968543    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:11:34.968583    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:11:35.743775    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:11:35.743824    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:11:39.969137    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:11:39.969169    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0916 04:11:40.317719    4655 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0916 04:11:40.321929    4655 out.go:177] * Enabled addons: storage-provisioner
	I0916 04:11:40.329882    4655 addons.go:510] duration metric: took 30.461913291s for enable addons: enabled=[storage-provisioner]
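Every healthz probe in this run, for both PIDs, dies with the same client timeout, and the StorageClass list above fails the same way: nothing on the host can reach 10.0.2.15:8443. That address is the guest side of QEMU's user-mode (slirp) network and is not routable from the host without a port forward, which is consistent with the qemu2 failures throughout this report. A manual probe from the host would reproduce it:

    curl -k --max-time 5 https://10.0.2.15:8443/healthz || echo "unreachable from host"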
	I0916 04:11:40.745931    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:11:40.745974    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:11:44.969901    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:11:44.969963    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:11:45.748092    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:11:45.748261    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:11:45.761000    4792 logs.go:276] 2 containers: [99668d812a17 40ab2f675d22]
	I0916 04:11:45.761088    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:11:45.771684    4792 logs.go:276] 2 containers: [751f46d9a26d 1973f852f436]
	I0916 04:11:45.771770    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:11:45.782494    4792 logs.go:276] 1 containers: [ed5c57a396a3]
	I0916 04:11:45.782572    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:11:45.793003    4792 logs.go:276] 2 containers: [41d020d3dedc 609e5463648c]
	I0916 04:11:45.793084    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:11:45.810395    4792 logs.go:276] 1 containers: [0bd7f5ba91b9]
	I0916 04:11:45.810472    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:11:45.821657    4792 logs.go:276] 2 containers: [7a56c27a016a 97104cca0786]
	I0916 04:11:45.821744    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:11:45.831968    4792 logs.go:276] 0 containers: []
	W0916 04:11:45.831980    4792 logs.go:278] No container was found matching "kindnet"
	I0916 04:11:45.832052    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:11:45.848798    4792 logs.go:276] 2 containers: [03045b2f9560 ed1b9ebe2601]
	I0916 04:11:45.848817    4792 logs.go:123] Gathering logs for kubelet ...
	I0916 04:11:45.848824    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:11:45.889223    4792 logs.go:123] Gathering logs for etcd [1973f852f436] ...
	I0916 04:11:45.889237    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1973f852f436"
	I0916 04:11:45.904041    4792 logs.go:123] Gathering logs for coredns [ed5c57a396a3] ...
	I0916 04:11:45.904054    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5c57a396a3"
	I0916 04:11:45.915681    4792 logs.go:123] Gathering logs for storage-provisioner [ed1b9ebe2601] ...
	I0916 04:11:45.915693    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed1b9ebe2601"
	I0916 04:11:45.926809    4792 logs.go:123] Gathering logs for container status ...
	I0916 04:11:45.926822    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:11:45.939647    4792 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:11:45.939658    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:11:46.034820    4792 logs.go:123] Gathering logs for storage-provisioner [03045b2f9560] ...
	I0916 04:11:46.034835    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03045b2f9560"
	I0916 04:11:46.046326    4792 logs.go:123] Gathering logs for Docker ...
	I0916 04:11:46.046357    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:11:46.070178    4792 logs.go:123] Gathering logs for kube-apiserver [99668d812a17] ...
	I0916 04:11:46.070185    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99668d812a17"
	I0916 04:11:46.084035    4792 logs.go:123] Gathering logs for etcd [751f46d9a26d] ...
	I0916 04:11:46.084044    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 751f46d9a26d"
	I0916 04:11:46.098207    4792 logs.go:123] Gathering logs for kube-scheduler [609e5463648c] ...
	I0916 04:11:46.098215    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 609e5463648c"
	I0916 04:11:46.120852    4792 logs.go:123] Gathering logs for kube-proxy [0bd7f5ba91b9] ...
	I0916 04:11:46.120864    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd7f5ba91b9"
	I0916 04:11:46.133052    4792 logs.go:123] Gathering logs for kube-controller-manager [97104cca0786] ...
	I0916 04:11:46.133065    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97104cca0786"
	I0916 04:11:46.145968    4792 logs.go:123] Gathering logs for dmesg ...
	I0916 04:11:46.145978    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:11:46.150850    4792 logs.go:123] Gathering logs for kube-apiserver [40ab2f675d22] ...
	I0916 04:11:46.150857    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40ab2f675d22"
	I0916 04:11:46.194477    4792 logs.go:123] Gathering logs for kube-scheduler [41d020d3dedc] ...
	I0916 04:11:46.194489    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41d020d3dedc"
	I0916 04:11:46.206044    4792 logs.go:123] Gathering logs for kube-controller-manager [7a56c27a016a] ...
	I0916 04:11:46.206055    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a56c27a016a"
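
	[editor's note] Between health probes, logs.go runs the gather cycle seen above: enumerate each control-plane component's containers, then tail each one's logs. The `0 containers: []` / `No container was found matching "kindnet"` warning is expected on these runs, since the kindnet CNI is not in use but the collector probes for it anyway. A rough Go sketch of that gather step, shelling out to the docker CLI exactly as the log lines show (the helper names and the component list are inferred from the log, not minikube's own identifiers):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists all containers (running or exited) whose name matches
// the k8s_<component> prefix, mirroring the "docker ps -a --filter" calls above.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

// tailLogs mirrors the "docker logs --tail 400 <id>" calls above.
func tailLogs(id string) (string, error) {
	out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
	return string(out), err
}

func main() {
	// Component list inferred from the gather cycle in this log.
	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager",
		"kindnet", "storage-provisioner"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println("docker ps failed:", err)
			continue
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids)
		for _, id := range ids {
			logs, _ := tailLogs(id) // exited containers still retain logs
			fmt.Printf("--- %s [%s] ---\n%s", c, id, logs)
		}
	}
}
```
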
	I0916 04:11:48.724436    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:11:49.971139    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:11:49.971218    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:11:53.726729    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:11:53.726900    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:11:53.748129    4792 logs.go:276] 2 containers: [99668d812a17 40ab2f675d22]
	I0916 04:11:53.748231    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:11:53.761558    4792 logs.go:276] 2 containers: [751f46d9a26d 1973f852f436]
	I0916 04:11:53.761649    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:11:53.771691    4792 logs.go:276] 1 containers: [ed5c57a396a3]
	I0916 04:11:53.771784    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:11:53.782195    4792 logs.go:276] 2 containers: [41d020d3dedc 609e5463648c]
	I0916 04:11:53.782292    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:11:53.792953    4792 logs.go:276] 1 containers: [0bd7f5ba91b9]
	I0916 04:11:53.793039    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:11:53.807829    4792 logs.go:276] 2 containers: [7a56c27a016a 97104cca0786]
	I0916 04:11:53.807909    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:11:53.819341    4792 logs.go:276] 0 containers: []
	W0916 04:11:53.819353    4792 logs.go:278] No container was found matching "kindnet"
	I0916 04:11:53.819423    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:11:53.830063    4792 logs.go:276] 2 containers: [03045b2f9560 ed1b9ebe2601]
	I0916 04:11:53.830081    4792 logs.go:123] Gathering logs for storage-provisioner [03045b2f9560] ...
	I0916 04:11:53.830087    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03045b2f9560"
	I0916 04:11:53.841518    4792 logs.go:123] Gathering logs for kube-apiserver [99668d812a17] ...
	I0916 04:11:53.841528    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99668d812a17"
	I0916 04:11:53.855237    4792 logs.go:123] Gathering logs for coredns [ed5c57a396a3] ...
	I0916 04:11:53.855246    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5c57a396a3"
	I0916 04:11:53.877790    4792 logs.go:123] Gathering logs for kube-scheduler [609e5463648c] ...
	I0916 04:11:53.877802    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 609e5463648c"
	I0916 04:11:53.898243    4792 logs.go:123] Gathering logs for Docker ...
	I0916 04:11:53.898253    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:11:53.924521    4792 logs.go:123] Gathering logs for kubelet ...
	I0916 04:11:53.924533    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:11:53.960889    4792 logs.go:123] Gathering logs for kube-controller-manager [7a56c27a016a] ...
	I0916 04:11:53.960901    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a56c27a016a"
	I0916 04:11:53.979596    4792 logs.go:123] Gathering logs for storage-provisioner [ed1b9ebe2601] ...
	I0916 04:11:53.979606    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed1b9ebe2601"
	I0916 04:11:53.991918    4792 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:11:53.991928    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:11:54.030601    4792 logs.go:123] Gathering logs for kube-scheduler [41d020d3dedc] ...
	I0916 04:11:54.030613    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41d020d3dedc"
	I0916 04:11:54.043459    4792 logs.go:123] Gathering logs for kube-proxy [0bd7f5ba91b9] ...
	I0916 04:11:54.043469    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd7f5ba91b9"
	I0916 04:11:54.055683    4792 logs.go:123] Gathering logs for etcd [1973f852f436] ...
	I0916 04:11:54.055693    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1973f852f436"
	I0916 04:11:54.072129    4792 logs.go:123] Gathering logs for kube-controller-manager [97104cca0786] ...
	I0916 04:11:54.072140    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97104cca0786"
	I0916 04:11:54.084876    4792 logs.go:123] Gathering logs for container status ...
	I0916 04:11:54.084890    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:11:54.096871    4792 logs.go:123] Gathering logs for dmesg ...
	I0916 04:11:54.096885    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:11:54.101481    4792 logs.go:123] Gathering logs for kube-apiserver [40ab2f675d22] ...
	I0916 04:11:54.101490    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40ab2f675d22"
	I0916 04:11:54.139849    4792 logs.go:123] Gathering logs for etcd [751f46d9a26d] ...
	I0916 04:11:54.139860    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 751f46d9a26d"
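
	[editor's note] One line in each gather cycle deserves unpacking: the "container status" command, sudo `which crictl || echo crictl` ps -a || sudo docker ps -a, prefers crictl when installed; when it is absent, `echo crictl` substitutes the bare word crictl, that command fails, and the `|| sudo docker ps -a` branch runs instead. A hedged Go equivalent of the same fallback (sudo omitted for brevity; a sketch, not minikube's implementation):

```go
package main

import (
	"fmt"
	"os/exec"
)

// containerStatus mirrors the shell fallback in the gather step above:
// use crictl when it is on PATH, otherwise fall back to plain docker ps.
func containerStatus() ([]byte, error) {
	if path, err := exec.LookPath("crictl"); err == nil {
		return exec.Command(path, "ps", "-a").CombinedOutput()
	}
	return exec.Command("docker", "ps", "-a").CombinedOutput()
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("container status failed:", err)
		return
	}
	fmt.Print(string(out))
}
```
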
	I0916 04:11:54.972774    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:11:54.972852    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:11:56.655930    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:11:59.974620    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:11:59.974643    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:12:01.658292    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:12:01.658513    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:12:01.678730    4792 logs.go:276] 2 containers: [99668d812a17 40ab2f675d22]
	I0916 04:12:01.678843    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:12:01.693133    4792 logs.go:276] 2 containers: [751f46d9a26d 1973f852f436]
	I0916 04:12:01.693219    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:12:01.709430    4792 logs.go:276] 1 containers: [ed5c57a396a3]
	I0916 04:12:01.709527    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:12:01.719853    4792 logs.go:276] 2 containers: [41d020d3dedc 609e5463648c]
	I0916 04:12:01.719936    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:12:01.730770    4792 logs.go:276] 1 containers: [0bd7f5ba91b9]
	I0916 04:12:01.730850    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:12:01.741628    4792 logs.go:276] 2 containers: [7a56c27a016a 97104cca0786]
	I0916 04:12:01.741714    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:12:01.752087    4792 logs.go:276] 0 containers: []
	W0916 04:12:01.752101    4792 logs.go:278] No container was found matching "kindnet"
	I0916 04:12:01.752168    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:12:01.762314    4792 logs.go:276] 2 containers: [03045b2f9560 ed1b9ebe2601]
	I0916 04:12:01.762332    4792 logs.go:123] Gathering logs for coredns [ed5c57a396a3] ...
	I0916 04:12:01.762338    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5c57a396a3"
	I0916 04:12:01.774009    4792 logs.go:123] Gathering logs for kube-scheduler [41d020d3dedc] ...
	I0916 04:12:01.774020    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41d020d3dedc"
	I0916 04:12:01.785711    4792 logs.go:123] Gathering logs for kube-controller-manager [7a56c27a016a] ...
	I0916 04:12:01.785725    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a56c27a016a"
	I0916 04:12:01.802890    4792 logs.go:123] Gathering logs for kubelet ...
	I0916 04:12:01.802899    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:12:01.841436    4792 logs.go:123] Gathering logs for kube-scheduler [609e5463648c] ...
	I0916 04:12:01.841445    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 609e5463648c"
	I0916 04:12:01.862677    4792 logs.go:123] Gathering logs for kube-controller-manager [97104cca0786] ...
	I0916 04:12:01.862692    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97104cca0786"
	I0916 04:12:01.875126    4792 logs.go:123] Gathering logs for etcd [751f46d9a26d] ...
	I0916 04:12:01.875140    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 751f46d9a26d"
	I0916 04:12:01.888933    4792 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:12:01.888944    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:12:01.925539    4792 logs.go:123] Gathering logs for etcd [1973f852f436] ...
	I0916 04:12:01.925552    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1973f852f436"
	I0916 04:12:01.940174    4792 logs.go:123] Gathering logs for kube-proxy [0bd7f5ba91b9] ...
	I0916 04:12:01.940183    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd7f5ba91b9"
	I0916 04:12:01.952531    4792 logs.go:123] Gathering logs for storage-provisioner [03045b2f9560] ...
	I0916 04:12:01.952544    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03045b2f9560"
	I0916 04:12:01.964293    4792 logs.go:123] Gathering logs for storage-provisioner [ed1b9ebe2601] ...
	I0916 04:12:01.964303    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed1b9ebe2601"
	I0916 04:12:01.975399    4792 logs.go:123] Gathering logs for container status ...
	I0916 04:12:01.975410    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:12:01.987038    4792 logs.go:123] Gathering logs for dmesg ...
	I0916 04:12:01.987049    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:12:01.991368    4792 logs.go:123] Gathering logs for kube-apiserver [40ab2f675d22] ...
	I0916 04:12:01.991374    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40ab2f675d22"
	I0916 04:12:02.029151    4792 logs.go:123] Gathering logs for Docker ...
	I0916 04:12:02.029163    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:12:02.055072    4792 logs.go:123] Gathering logs for kube-apiserver [99668d812a17] ...
	I0916 04:12:02.055084    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99668d812a17"
	I0916 04:12:04.570939    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:12:04.976740    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:12:04.976776    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:12:09.572873    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:12:09.572997    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:12:09.583649    4792 logs.go:276] 2 containers: [99668d812a17 40ab2f675d22]
	I0916 04:12:09.583734    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:12:09.594322    4792 logs.go:276] 2 containers: [751f46d9a26d 1973f852f436]
	I0916 04:12:09.594408    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:12:09.604576    4792 logs.go:276] 1 containers: [ed5c57a396a3]
	I0916 04:12:09.604655    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:12:09.615359    4792 logs.go:276] 2 containers: [41d020d3dedc 609e5463648c]
	I0916 04:12:09.615439    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:12:09.625954    4792 logs.go:276] 1 containers: [0bd7f5ba91b9]
	I0916 04:12:09.626038    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:12:09.636601    4792 logs.go:276] 2 containers: [7a56c27a016a 97104cca0786]
	I0916 04:12:09.636692    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:12:09.647036    4792 logs.go:276] 0 containers: []
	W0916 04:12:09.647049    4792 logs.go:278] No container was found matching "kindnet"
	I0916 04:12:09.647124    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:12:09.657846    4792 logs.go:276] 2 containers: [03045b2f9560 ed1b9ebe2601]
	I0916 04:12:09.657863    4792 logs.go:123] Gathering logs for kube-controller-manager [7a56c27a016a] ...
	I0916 04:12:09.657869    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a56c27a016a"
	I0916 04:12:09.678132    4792 logs.go:123] Gathering logs for kubelet ...
	I0916 04:12:09.678143    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:12:09.715111    4792 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:12:09.715128    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:12:09.756696    4792 logs.go:123] Gathering logs for kube-scheduler [41d020d3dedc] ...
	I0916 04:12:09.756713    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41d020d3dedc"
	I0916 04:12:09.768834    4792 logs.go:123] Gathering logs for kube-controller-manager [97104cca0786] ...
	I0916 04:12:09.768850    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97104cca0786"
	I0916 04:12:09.781757    4792 logs.go:123] Gathering logs for storage-provisioner [ed1b9ebe2601] ...
	I0916 04:12:09.781768    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed1b9ebe2601"
	I0916 04:12:09.793351    4792 logs.go:123] Gathering logs for kube-apiserver [40ab2f675d22] ...
	I0916 04:12:09.793362    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40ab2f675d22"
	I0916 04:12:09.978413    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:12:09.978508    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:12:10.006827    4655 logs.go:276] 1 containers: [bb77a45fbb50]
	I0916 04:12:10.006917    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:12:10.020818    4655 logs.go:276] 1 containers: [a44a59d44c6e]
	I0916 04:12:10.020896    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:12:10.032167    4655 logs.go:276] 2 containers: [8869c7622640 e3ad1db7ced5]
	I0916 04:12:10.032252    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:12:10.042487    4655 logs.go:276] 1 containers: [6497fc64f33e]
	I0916 04:12:10.042562    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:12:10.052459    4655 logs.go:276] 1 containers: [c0732c73e3bf]
	I0916 04:12:10.052551    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:12:10.062891    4655 logs.go:276] 1 containers: [6872f5cacb62]
	I0916 04:12:10.062970    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:12:10.073373    4655 logs.go:276] 0 containers: []
	W0916 04:12:10.073383    4655 logs.go:278] No container was found matching "kindnet"
	I0916 04:12:10.073444    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:12:10.083937    4655 logs.go:276] 1 containers: [1b9c4326b62d]
	I0916 04:12:10.083952    4655 logs.go:123] Gathering logs for storage-provisioner [1b9c4326b62d] ...
	I0916 04:12:10.083957    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b9c4326b62d"
	I0916 04:12:10.096607    4655 logs.go:123] Gathering logs for container status ...
	I0916 04:12:10.096620    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:12:10.107927    4655 logs.go:123] Gathering logs for dmesg ...
	I0916 04:12:10.107938    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:12:10.112639    4655 logs.go:123] Gathering logs for etcd [a44a59d44c6e] ...
	I0916 04:12:10.112646    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a44a59d44c6e"
	I0916 04:12:10.127263    4655 logs.go:123] Gathering logs for coredns [e3ad1db7ced5] ...
	I0916 04:12:10.127273    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3ad1db7ced5"
	I0916 04:12:10.140923    4655 logs.go:123] Gathering logs for kube-proxy [c0732c73e3bf] ...
	I0916 04:12:10.140938    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0732c73e3bf"
	I0916 04:12:10.152645    4655 logs.go:123] Gathering logs for kube-controller-manager [6872f5cacb62] ...
	I0916 04:12:10.152656    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6872f5cacb62"
	I0916 04:12:10.170741    4655 logs.go:123] Gathering logs for Docker ...
	I0916 04:12:10.170751    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:12:10.194922    4655 logs.go:123] Gathering logs for kubelet ...
	I0916 04:12:10.194929    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:12:10.233795    4655 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:12:10.233805    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:12:10.269293    4655 logs.go:123] Gathering logs for kube-apiserver [bb77a45fbb50] ...
	I0916 04:12:10.269304    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb77a45fbb50"
	I0916 04:12:10.283814    4655 logs.go:123] Gathering logs for coredns [8869c7622640] ...
	I0916 04:12:10.283823    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8869c7622640"
	I0916 04:12:10.296514    4655 logs.go:123] Gathering logs for kube-scheduler [6497fc64f33e] ...
	I0916 04:12:10.296529    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6497fc64f33e"
	I0916 04:12:12.813095    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:12:09.837439    4792 logs.go:123] Gathering logs for etcd [751f46d9a26d] ...
	I0916 04:12:09.837467    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 751f46d9a26d"
	I0916 04:12:09.852197    4792 logs.go:123] Gathering logs for etcd [1973f852f436] ...
	I0916 04:12:09.852209    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1973f852f436"
	I0916 04:12:09.866518    4792 logs.go:123] Gathering logs for container status ...
	I0916 04:12:09.866528    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:12:09.877932    4792 logs.go:123] Gathering logs for coredns [ed5c57a396a3] ...
	I0916 04:12:09.877946    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5c57a396a3"
	I0916 04:12:09.888937    4792 logs.go:123] Gathering logs for storage-provisioner [03045b2f9560] ...
	I0916 04:12:09.888948    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03045b2f9560"
	I0916 04:12:09.901152    4792 logs.go:123] Gathering logs for Docker ...
	I0916 04:12:09.901163    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:12:09.925362    4792 logs.go:123] Gathering logs for kube-proxy [0bd7f5ba91b9] ...
	I0916 04:12:09.925369    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd7f5ba91b9"
	I0916 04:12:09.936980    4792 logs.go:123] Gathering logs for dmesg ...
	I0916 04:12:09.936993    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:12:09.941468    4792 logs.go:123] Gathering logs for kube-apiserver [99668d812a17] ...
	I0916 04:12:09.941476    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99668d812a17"
	I0916 04:12:09.955594    4792 logs.go:123] Gathering logs for kube-scheduler [609e5463648c] ...
	I0916 04:12:09.955609    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 609e5463648c"
	I0916 04:12:12.479963    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:12:17.815541    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:12:17.815641    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:12:17.827252    4655 logs.go:276] 1 containers: [bb77a45fbb50]
	I0916 04:12:17.827341    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:12:17.839278    4655 logs.go:276] 1 containers: [a44a59d44c6e]
	I0916 04:12:17.839368    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:12:17.850709    4655 logs.go:276] 2 containers: [8869c7622640 e3ad1db7ced5]
	I0916 04:12:17.850792    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:12:17.861721    4655 logs.go:276] 1 containers: [6497fc64f33e]
	I0916 04:12:17.861807    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:12:17.873333    4655 logs.go:276] 1 containers: [c0732c73e3bf]
	I0916 04:12:17.873418    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:12:17.884821    4655 logs.go:276] 1 containers: [6872f5cacb62]
	I0916 04:12:17.884906    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:12:17.895952    4655 logs.go:276] 0 containers: []
	W0916 04:12:17.895962    4655 logs.go:278] No container was found matching "kindnet"
	I0916 04:12:17.896032    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:12:17.906522    4655 logs.go:276] 1 containers: [1b9c4326b62d]
	I0916 04:12:17.906541    4655 logs.go:123] Gathering logs for storage-provisioner [1b9c4326b62d] ...
	I0916 04:12:17.906546    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b9c4326b62d"
	I0916 04:12:17.918579    4655 logs.go:123] Gathering logs for dmesg ...
	I0916 04:12:17.918590    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:12:17.923164    4655 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:12:17.923174    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:12:17.958898    4655 logs.go:123] Gathering logs for etcd [a44a59d44c6e] ...
	I0916 04:12:17.958913    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a44a59d44c6e"
	I0916 04:12:17.972778    4655 logs.go:123] Gathering logs for kube-scheduler [6497fc64f33e] ...
	I0916 04:12:17.972790    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6497fc64f33e"
	I0916 04:12:17.994146    4655 logs.go:123] Gathering logs for kube-proxy [c0732c73e3bf] ...
	I0916 04:12:17.994156    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0732c73e3bf"
	I0916 04:12:18.006274    4655 logs.go:123] Gathering logs for kube-controller-manager [6872f5cacb62] ...
	I0916 04:12:18.006285    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6872f5cacb62"
	I0916 04:12:18.033663    4655 logs.go:123] Gathering logs for kubelet ...
	I0916 04:12:18.033675    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:12:18.072591    4655 logs.go:123] Gathering logs for kube-apiserver [bb77a45fbb50] ...
	I0916 04:12:18.072603    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb77a45fbb50"
	I0916 04:12:18.087355    4655 logs.go:123] Gathering logs for coredns [8869c7622640] ...
	I0916 04:12:18.087370    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8869c7622640"
	I0916 04:12:18.098856    4655 logs.go:123] Gathering logs for coredns [e3ad1db7ced5] ...
	I0916 04:12:18.098869    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3ad1db7ced5"
	I0916 04:12:18.110242    4655 logs.go:123] Gathering logs for Docker ...
	I0916 04:12:18.110254    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:12:18.134794    4655 logs.go:123] Gathering logs for container status ...
	I0916 04:12:18.134803    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:12:17.481676    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:12:17.481890    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:12:17.498274    4792 logs.go:276] 2 containers: [99668d812a17 40ab2f675d22]
	I0916 04:12:17.498370    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:12:17.510539    4792 logs.go:276] 2 containers: [751f46d9a26d 1973f852f436]
	I0916 04:12:17.510627    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:12:17.521875    4792 logs.go:276] 1 containers: [ed5c57a396a3]
	I0916 04:12:17.521955    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:12:17.536396    4792 logs.go:276] 2 containers: [41d020d3dedc 609e5463648c]
	I0916 04:12:17.536480    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:12:17.546600    4792 logs.go:276] 1 containers: [0bd7f5ba91b9]
	I0916 04:12:17.546676    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:12:17.557286    4792 logs.go:276] 2 containers: [7a56c27a016a 97104cca0786]
	I0916 04:12:17.557369    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:12:17.567885    4792 logs.go:276] 0 containers: []
	W0916 04:12:17.567895    4792 logs.go:278] No container was found matching "kindnet"
	I0916 04:12:17.567956    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:12:17.578564    4792 logs.go:276] 2 containers: [03045b2f9560 ed1b9ebe2601]
	I0916 04:12:17.578584    4792 logs.go:123] Gathering logs for etcd [751f46d9a26d] ...
	I0916 04:12:17.578589    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 751f46d9a26d"
	I0916 04:12:17.592625    4792 logs.go:123] Gathering logs for kube-scheduler [41d020d3dedc] ...
	I0916 04:12:17.592638    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41d020d3dedc"
	I0916 04:12:17.605839    4792 logs.go:123] Gathering logs for container status ...
	I0916 04:12:17.605851    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:12:17.618361    4792 logs.go:123] Gathering logs for kube-apiserver [40ab2f675d22] ...
	I0916 04:12:17.618372    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40ab2f675d22"
	I0916 04:12:17.656569    4792 logs.go:123] Gathering logs for dmesg ...
	I0916 04:12:17.656581    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:12:17.661364    4792 logs.go:123] Gathering logs for coredns [ed5c57a396a3] ...
	I0916 04:12:17.661374    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5c57a396a3"
	I0916 04:12:17.672083    4792 logs.go:123] Gathering logs for kube-scheduler [609e5463648c] ...
	I0916 04:12:17.672094    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 609e5463648c"
	I0916 04:12:17.692329    4792 logs.go:123] Gathering logs for kube-proxy [0bd7f5ba91b9] ...
	I0916 04:12:17.692340    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd7f5ba91b9"
	I0916 04:12:17.703886    4792 logs.go:123] Gathering logs for Docker ...
	I0916 04:12:17.703895    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:12:17.729036    4792 logs.go:123] Gathering logs for kubelet ...
	I0916 04:12:17.729043    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:12:17.768102    4792 logs.go:123] Gathering logs for kube-apiserver [99668d812a17] ...
	I0916 04:12:17.768117    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99668d812a17"
	I0916 04:12:17.783953    4792 logs.go:123] Gathering logs for kube-controller-manager [97104cca0786] ...
	I0916 04:12:17.783962    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97104cca0786"
	I0916 04:12:17.796590    4792 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:12:17.796600    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:12:17.834867    4792 logs.go:123] Gathering logs for kube-controller-manager [7a56c27a016a] ...
	I0916 04:12:17.834884    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a56c27a016a"
	I0916 04:12:17.853039    4792 logs.go:123] Gathering logs for storage-provisioner [03045b2f9560] ...
	I0916 04:12:17.853049    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03045b2f9560"
	I0916 04:12:17.865870    4792 logs.go:123] Gathering logs for storage-provisioner [ed1b9ebe2601] ...
	I0916 04:12:17.865881    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed1b9ebe2601"
	I0916 04:12:17.878973    4792 logs.go:123] Gathering logs for etcd [1973f852f436] ...
	I0916 04:12:17.878986    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1973f852f436"
	I0916 04:12:20.648364    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:12:20.396674    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:12:25.648594    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:12:25.648686    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:12:25.660140    4655 logs.go:276] 1 containers: [bb77a45fbb50]
	I0916 04:12:25.660227    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:12:25.671667    4655 logs.go:276] 1 containers: [a44a59d44c6e]
	I0916 04:12:25.671751    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:12:25.682421    4655 logs.go:276] 2 containers: [8869c7622640 e3ad1db7ced5]
	I0916 04:12:25.682505    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:12:25.694095    4655 logs.go:276] 1 containers: [6497fc64f33e]
	I0916 04:12:25.694179    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:12:25.705415    4655 logs.go:276] 1 containers: [c0732c73e3bf]
	I0916 04:12:25.705502    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:12:25.717719    4655 logs.go:276] 1 containers: [6872f5cacb62]
	I0916 04:12:25.717799    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:12:25.729037    4655 logs.go:276] 0 containers: []
	W0916 04:12:25.729049    4655 logs.go:278] No container was found matching "kindnet"
	I0916 04:12:25.729129    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:12:25.740337    4655 logs.go:276] 1 containers: [1b9c4326b62d]
	I0916 04:12:25.740352    4655 logs.go:123] Gathering logs for storage-provisioner [1b9c4326b62d] ...
	I0916 04:12:25.740356    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b9c4326b62d"
	I0916 04:12:25.753804    4655 logs.go:123] Gathering logs for Docker ...
	I0916 04:12:25.753816    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:12:25.778412    4655 logs.go:123] Gathering logs for container status ...
	I0916 04:12:25.778423    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:12:25.791078    4655 logs.go:123] Gathering logs for kube-apiserver [bb77a45fbb50] ...
	I0916 04:12:25.791091    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb77a45fbb50"
	I0916 04:12:25.805894    4655 logs.go:123] Gathering logs for kube-scheduler [6497fc64f33e] ...
	I0916 04:12:25.805902    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6497fc64f33e"
	I0916 04:12:25.822203    4655 logs.go:123] Gathering logs for kube-proxy [c0732c73e3bf] ...
	I0916 04:12:25.822214    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0732c73e3bf"
	I0916 04:12:25.834441    4655 logs.go:123] Gathering logs for etcd [a44a59d44c6e] ...
	I0916 04:12:25.834456    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a44a59d44c6e"
	I0916 04:12:25.848476    4655 logs.go:123] Gathering logs for coredns [8869c7622640] ...
	I0916 04:12:25.848487    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8869c7622640"
	I0916 04:12:25.866456    4655 logs.go:123] Gathering logs for coredns [e3ad1db7ced5] ...
	I0916 04:12:25.866472    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3ad1db7ced5"
	I0916 04:12:25.877916    4655 logs.go:123] Gathering logs for kube-controller-manager [6872f5cacb62] ...
	I0916 04:12:25.877927    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6872f5cacb62"
	I0916 04:12:25.895886    4655 logs.go:123] Gathering logs for kubelet ...
	I0916 04:12:25.895897    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:12:25.934621    4655 logs.go:123] Gathering logs for dmesg ...
	I0916 04:12:25.934631    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:12:25.939094    4655 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:12:25.939100    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:12:25.398916    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:12:25.399164    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:12:25.418882    4792 logs.go:276] 2 containers: [99668d812a17 40ab2f675d22]
	I0916 04:12:25.418984    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:12:25.433094    4792 logs.go:276] 2 containers: [751f46d9a26d 1973f852f436]
	I0916 04:12:25.433184    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:12:25.445134    4792 logs.go:276] 1 containers: [ed5c57a396a3]
	I0916 04:12:25.445218    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:12:25.456106    4792 logs.go:276] 2 containers: [41d020d3dedc 609e5463648c]
	I0916 04:12:25.456195    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:12:25.466293    4792 logs.go:276] 1 containers: [0bd7f5ba91b9]
	I0916 04:12:25.466374    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:12:25.479454    4792 logs.go:276] 2 containers: [7a56c27a016a 97104cca0786]
	I0916 04:12:25.479542    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:12:25.489740    4792 logs.go:276] 0 containers: []
	W0916 04:12:25.489753    4792 logs.go:278] No container was found matching "kindnet"
	I0916 04:12:25.489830    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:12:25.500428    4792 logs.go:276] 2 containers: [03045b2f9560 ed1b9ebe2601]
	I0916 04:12:25.500446    4792 logs.go:123] Gathering logs for kubelet ...
	I0916 04:12:25.500451    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:12:25.539346    4792 logs.go:123] Gathering logs for kube-apiserver [40ab2f675d22] ...
	I0916 04:12:25.539354    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40ab2f675d22"
	I0916 04:12:25.581930    4792 logs.go:123] Gathering logs for etcd [1973f852f436] ...
	I0916 04:12:25.581941    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1973f852f436"
	I0916 04:12:25.596580    4792 logs.go:123] Gathering logs for kube-scheduler [41d020d3dedc] ...
	I0916 04:12:25.596591    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41d020d3dedc"
	I0916 04:12:25.608730    4792 logs.go:123] Gathering logs for kube-proxy [0bd7f5ba91b9] ...
	I0916 04:12:25.608744    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd7f5ba91b9"
	I0916 04:12:25.620527    4792 logs.go:123] Gathering logs for kube-controller-manager [97104cca0786] ...
	I0916 04:12:25.620538    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97104cca0786"
	I0916 04:12:25.633408    4792 logs.go:123] Gathering logs for storage-provisioner [03045b2f9560] ...
	I0916 04:12:25.633420    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03045b2f9560"
	I0916 04:12:25.645484    4792 logs.go:123] Gathering logs for kube-apiserver [99668d812a17] ...
	I0916 04:12:25.645499    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99668d812a17"
	I0916 04:12:25.661017    4792 logs.go:123] Gathering logs for kube-scheduler [609e5463648c] ...
	I0916 04:12:25.661027    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 609e5463648c"
	I0916 04:12:25.683219    4792 logs.go:123] Gathering logs for storage-provisioner [ed1b9ebe2601] ...
	I0916 04:12:25.683227    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed1b9ebe2601"
	I0916 04:12:25.695113    4792 logs.go:123] Gathering logs for container status ...
	I0916 04:12:25.695122    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:12:25.708942    4792 logs.go:123] Gathering logs for etcd [751f46d9a26d] ...
	I0916 04:12:25.708953    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 751f46d9a26d"
	I0916 04:12:25.725530    4792 logs.go:123] Gathering logs for coredns [ed5c57a396a3] ...
	I0916 04:12:25.725549    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5c57a396a3"
	I0916 04:12:25.738374    4792 logs.go:123] Gathering logs for dmesg ...
	I0916 04:12:25.738390    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:12:25.743077    4792 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:12:25.743095    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:12:25.778883    4792 logs.go:123] Gathering logs for kube-controller-manager [7a56c27a016a] ...
	I0916 04:12:25.778897    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a56c27a016a"
	I0916 04:12:25.803174    4792 logs.go:123] Gathering logs for Docker ...
	I0916 04:12:25.803188    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:12:28.331783    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:12:28.475196    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:12:33.334042    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:12:33.334282    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:12:33.350832    4792 logs.go:276] 2 containers: [99668d812a17 40ab2f675d22]
	I0916 04:12:33.350936    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:12:33.363877    4792 logs.go:276] 2 containers: [751f46d9a26d 1973f852f436]
	I0916 04:12:33.363969    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:12:33.374661    4792 logs.go:276] 1 containers: [ed5c57a396a3]
	I0916 04:12:33.374741    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:12:33.384993    4792 logs.go:276] 2 containers: [41d020d3dedc 609e5463648c]
	I0916 04:12:33.385089    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:12:33.395674    4792 logs.go:276] 1 containers: [0bd7f5ba91b9]
	I0916 04:12:33.395755    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:12:33.406161    4792 logs.go:276] 2 containers: [7a56c27a016a 97104cca0786]
	I0916 04:12:33.406242    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:12:33.416766    4792 logs.go:276] 0 containers: []
	W0916 04:12:33.416778    4792 logs.go:278] No container was found matching "kindnet"
	I0916 04:12:33.416845    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:12:33.428483    4792 logs.go:276] 2 containers: [03045b2f9560 ed1b9ebe2601]
	I0916 04:12:33.428501    4792 logs.go:123] Gathering logs for dmesg ...
	I0916 04:12:33.428506    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:12:33.433721    4792 logs.go:123] Gathering logs for container status ...
	I0916 04:12:33.433729    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:12:33.446718    4792 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:12:33.446732    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:12:33.485927    4792 logs.go:123] Gathering logs for etcd [751f46d9a26d] ...
	I0916 04:12:33.485939    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 751f46d9a26d"
	I0916 04:12:33.510936    4792 logs.go:123] Gathering logs for kube-scheduler [41d020d3dedc] ...
	I0916 04:12:33.510949    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41d020d3dedc"
	I0916 04:12:33.528550    4792 logs.go:123] Gathering logs for kube-proxy [0bd7f5ba91b9] ...
	I0916 04:12:33.528559    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd7f5ba91b9"
	I0916 04:12:33.541022    4792 logs.go:123] Gathering logs for storage-provisioner [03045b2f9560] ...
	I0916 04:12:33.541035    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03045b2f9560"
	I0916 04:12:33.553904    4792 logs.go:123] Gathering logs for kube-apiserver [40ab2f675d22] ...
	I0916 04:12:33.553912    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40ab2f675d22"
	I0916 04:12:33.593278    4792 logs.go:123] Gathering logs for etcd [1973f852f436] ...
	I0916 04:12:33.593289    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1973f852f436"
	I0916 04:12:33.615798    4792 logs.go:123] Gathering logs for coredns [ed5c57a396a3] ...
	I0916 04:12:33.615809    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5c57a396a3"
	I0916 04:12:33.632224    4792 logs.go:123] Gathering logs for kube-scheduler [609e5463648c] ...
	I0916 04:12:33.632237    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 609e5463648c"
	I0916 04:12:33.654483    4792 logs.go:123] Gathering logs for kube-controller-manager [7a56c27a016a] ...
	I0916 04:12:33.654490    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a56c27a016a"
	I0916 04:12:33.672882    4792 logs.go:123] Gathering logs for kube-controller-manager [97104cca0786] ...
	I0916 04:12:33.672897    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97104cca0786"
	I0916 04:12:33.686618    4792 logs.go:123] Gathering logs for kubelet ...
	I0916 04:12:33.686631    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:12:33.726812    4792 logs.go:123] Gathering logs for kube-apiserver [99668d812a17] ...
	I0916 04:12:33.726832    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99668d812a17"
	I0916 04:12:33.743667    4792 logs.go:123] Gathering logs for storage-provisioner [ed1b9ebe2601] ...
	I0916 04:12:33.743676    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed1b9ebe2601"
	I0916 04:12:33.757102    4792 logs.go:123] Gathering logs for Docker ...
	I0916 04:12:33.757113    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:12:33.477347    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:12:33.477447    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:12:33.489384    4655 logs.go:276] 1 containers: [bb77a45fbb50]
	I0916 04:12:33.489473    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:12:33.501077    4655 logs.go:276] 1 containers: [a44a59d44c6e]
	I0916 04:12:33.501158    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:12:33.514676    4655 logs.go:276] 2 containers: [8869c7622640 e3ad1db7ced5]
	I0916 04:12:33.514760    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:12:33.526585    4655 logs.go:276] 1 containers: [6497fc64f33e]
	I0916 04:12:33.526673    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:12:33.538439    4655 logs.go:276] 1 containers: [c0732c73e3bf]
	I0916 04:12:33.538531    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:12:33.553565    4655 logs.go:276] 1 containers: [6872f5cacb62]
	I0916 04:12:33.553655    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:12:33.566891    4655 logs.go:276] 0 containers: []
	W0916 04:12:33.566903    4655 logs.go:278] No container was found matching "kindnet"
	I0916 04:12:33.566978    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:12:33.578356    4655 logs.go:276] 1 containers: [1b9c4326b62d]
	I0916 04:12:33.578375    4655 logs.go:123] Gathering logs for kube-controller-manager [6872f5cacb62] ...
	I0916 04:12:33.578381    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6872f5cacb62"
	I0916 04:12:33.596755    4655 logs.go:123] Gathering logs for storage-provisioner [1b9c4326b62d] ...
	I0916 04:12:33.596769    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b9c4326b62d"
	I0916 04:12:33.609697    4655 logs.go:123] Gathering logs for kubelet ...
	I0916 04:12:33.609709    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:12:33.649007    4655 logs.go:123] Gathering logs for dmesg ...
	I0916 04:12:33.649018    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:12:33.654150    4655 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:12:33.654156    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:12:33.711492    4655 logs.go:123] Gathering logs for coredns [8869c7622640] ...
	I0916 04:12:33.711505    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8869c7622640"
	I0916 04:12:33.727630    4655 logs.go:123] Gathering logs for kube-proxy [c0732c73e3bf] ...
	I0916 04:12:33.727638    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0732c73e3bf"
	I0916 04:12:33.740310    4655 logs.go:123] Gathering logs for Docker ...
	I0916 04:12:33.740321    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:12:33.765709    4655 logs.go:123] Gathering logs for container status ...
	I0916 04:12:33.765724    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:12:33.781589    4655 logs.go:123] Gathering logs for kube-apiserver [bb77a45fbb50] ...
	I0916 04:12:33.781600    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb77a45fbb50"
	I0916 04:12:33.796809    4655 logs.go:123] Gathering logs for etcd [a44a59d44c6e] ...
	I0916 04:12:33.796823    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a44a59d44c6e"
	I0916 04:12:33.814674    4655 logs.go:123] Gathering logs for coredns [e3ad1db7ced5] ...
	I0916 04:12:33.814687    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3ad1db7ced5"
	I0916 04:12:33.826860    4655 logs.go:123] Gathering logs for kube-scheduler [6497fc64f33e] ...
	I0916 04:12:33.826876    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6497fc64f33e"
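The block above is one full probe-and-gather cycle: the apiserver health check at https://10.0.2.15:8443/healthz times out after roughly five seconds, and minikube falls back to enumerating the control-plane containers and tailing their logs before probing again. Below is a minimal Go sketch of a probe that fails with the same "context deadline exceeded (Client.Timeout exceeded while awaiting headers)" error shape; the endpoint comes from the log, while the timeout value and TLS handling are illustrative assumptions, not minikube's actual api_server.go code:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Assumed probe budget: the log shows about five seconds between each
	// "Checking apiserver healthz" line and its matching "stopped" line.
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption: the test apiserver's certificate is self-signed,
			// so this illustrative probe skips verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}

	resp, err := client.Get("https://10.0.2.15:8443/healthz")
	if err != nil {
		// An unresponsive endpoint produces the same error text seen in the
		// log: context deadline exceeded (Client.Timeout exceeded while
		// awaiting headers).
		fmt.Printf("stopped: %v\n", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("healthz:", resp.Status)
}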
	I0916 04:12:36.343546    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:12:36.283799    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:12:41.345607    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:12:41.345708    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:12:41.359239    4655 logs.go:276] 1 containers: [bb77a45fbb50]
	I0916 04:12:41.359356    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:12:41.370695    4655 logs.go:276] 1 containers: [a44a59d44c6e]
	I0916 04:12:41.370780    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:12:41.391733    4655 logs.go:276] 2 containers: [8869c7622640 e3ad1db7ced5]
	I0916 04:12:41.391810    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:12:41.402897    4655 logs.go:276] 1 containers: [6497fc64f33e]
	I0916 04:12:41.402975    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:12:41.414726    4655 logs.go:276] 1 containers: [c0732c73e3bf]
	I0916 04:12:41.414806    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:12:41.429838    4655 logs.go:276] 1 containers: [6872f5cacb62]
	I0916 04:12:41.429921    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:12:41.442892    4655 logs.go:276] 0 containers: []
	W0916 04:12:41.442900    4655 logs.go:278] No container was found matching "kindnet"
	I0916 04:12:41.442969    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:12:41.455510    4655 logs.go:276] 1 containers: [1b9c4326b62d]
	I0916 04:12:41.455525    4655 logs.go:123] Gathering logs for kubelet ...
	I0916 04:12:41.455530    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:12:41.495562    4655 logs.go:123] Gathering logs for etcd [a44a59d44c6e] ...
	I0916 04:12:41.495573    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a44a59d44c6e"
	I0916 04:12:41.509659    4655 logs.go:123] Gathering logs for coredns [e3ad1db7ced5] ...
	I0916 04:12:41.509671    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3ad1db7ced5"
	I0916 04:12:41.522678    4655 logs.go:123] Gathering logs for kube-scheduler [6497fc64f33e] ...
	I0916 04:12:41.522687    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6497fc64f33e"
	I0916 04:12:41.538494    4655 logs.go:123] Gathering logs for kube-controller-manager [6872f5cacb62] ...
	I0916 04:12:41.538505    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6872f5cacb62"
	I0916 04:12:41.557264    4655 logs.go:123] Gathering logs for container status ...
	I0916 04:12:41.557276    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:12:41.570126    4655 logs.go:123] Gathering logs for dmesg ...
	I0916 04:12:41.570138    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:12:41.574873    4655 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:12:41.574883    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:12:41.613311    4655 logs.go:123] Gathering logs for kube-apiserver [bb77a45fbb50] ...
	I0916 04:12:41.613321    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb77a45fbb50"
	I0916 04:12:41.628959    4655 logs.go:123] Gathering logs for coredns [8869c7622640] ...
	I0916 04:12:41.628971    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8869c7622640"
	I0916 04:12:41.641716    4655 logs.go:123] Gathering logs for kube-proxy [c0732c73e3bf] ...
	I0916 04:12:41.641729    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0732c73e3bf"
	I0916 04:12:41.654373    4655 logs.go:123] Gathering logs for storage-provisioner [1b9c4326b62d] ...
	I0916 04:12:41.654386    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b9c4326b62d"
	I0916 04:12:41.666305    4655 logs.go:123] Gathering logs for Docker ...
	I0916 04:12:41.666321    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:12:41.286105    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:12:41.286496    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:12:41.318396    4792 logs.go:276] 2 containers: [99668d812a17 40ab2f675d22]
	I0916 04:12:41.318552    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:12:41.339224    4792 logs.go:276] 2 containers: [751f46d9a26d 1973f852f436]
	I0916 04:12:41.339333    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:12:41.353304    4792 logs.go:276] 1 containers: [ed5c57a396a3]
	I0916 04:12:41.353389    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:12:41.365733    4792 logs.go:276] 2 containers: [41d020d3dedc 609e5463648c]
	I0916 04:12:41.365821    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:12:41.377171    4792 logs.go:276] 1 containers: [0bd7f5ba91b9]
	I0916 04:12:41.377260    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:12:41.388277    4792 logs.go:276] 2 containers: [7a56c27a016a 97104cca0786]
	I0916 04:12:41.388363    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:12:41.399304    4792 logs.go:276] 0 containers: []
	W0916 04:12:41.399316    4792 logs.go:278] No container was found matching "kindnet"
	I0916 04:12:41.399395    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:12:41.411438    4792 logs.go:276] 2 containers: [03045b2f9560 ed1b9ebe2601]
	I0916 04:12:41.411455    4792 logs.go:123] Gathering logs for kube-apiserver [99668d812a17] ...
	I0916 04:12:41.411462    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99668d812a17"
	I0916 04:12:41.426390    4792 logs.go:123] Gathering logs for etcd [1973f852f436] ...
	I0916 04:12:41.426402    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1973f852f436"
	I0916 04:12:41.441784    4792 logs.go:123] Gathering logs for kube-scheduler [609e5463648c] ...
	I0916 04:12:41.441801    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 609e5463648c"
	I0916 04:12:41.464101    4792 logs.go:123] Gathering logs for storage-provisioner [ed1b9ebe2601] ...
	I0916 04:12:41.464112    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed1b9ebe2601"
	I0916 04:12:41.475959    4792 logs.go:123] Gathering logs for kubelet ...
	I0916 04:12:41.475970    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:12:41.516052    4792 logs.go:123] Gathering logs for dmesg ...
	I0916 04:12:41.516066    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:12:41.520588    4792 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:12:41.520601    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:12:41.558238    4792 logs.go:123] Gathering logs for kube-scheduler [41d020d3dedc] ...
	I0916 04:12:41.558248    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41d020d3dedc"
	I0916 04:12:41.570688    4792 logs.go:123] Gathering logs for kube-controller-manager [7a56c27a016a] ...
	I0916 04:12:41.570698    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a56c27a016a"
	I0916 04:12:41.596871    4792 logs.go:123] Gathering logs for kube-controller-manager [97104cca0786] ...
	I0916 04:12:41.596884    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97104cca0786"
	I0916 04:12:41.610808    4792 logs.go:123] Gathering logs for Docker ...
	I0916 04:12:41.610821    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:12:41.637132    4792 logs.go:123] Gathering logs for kube-apiserver [40ab2f675d22] ...
	I0916 04:12:41.637147    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40ab2f675d22"
	I0916 04:12:41.682324    4792 logs.go:123] Gathering logs for etcd [751f46d9a26d] ...
	I0916 04:12:41.682336    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 751f46d9a26d"
	I0916 04:12:41.697027    4792 logs.go:123] Gathering logs for kube-proxy [0bd7f5ba91b9] ...
	I0916 04:12:41.697037    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd7f5ba91b9"
	I0916 04:12:41.708755    4792 logs.go:123] Gathering logs for coredns [ed5c57a396a3] ...
	I0916 04:12:41.708766    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5c57a396a3"
	I0916 04:12:41.719951    4792 logs.go:123] Gathering logs for storage-provisioner [03045b2f9560] ...
	I0916 04:12:41.719962    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03045b2f9560"
	I0916 04:12:41.735979    4792 logs.go:123] Gathering logs for container status ...
	I0916 04:12:41.735990    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:12:44.249231    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:12:44.193276    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:12:49.250308    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:12:49.250372    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:12:49.261796    4792 logs.go:276] 2 containers: [99668d812a17 40ab2f675d22]
	I0916 04:12:49.261849    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:12:49.273582    4792 logs.go:276] 2 containers: [751f46d9a26d 1973f852f436]
	I0916 04:12:49.273635    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:12:49.285371    4792 logs.go:276] 1 containers: [ed5c57a396a3]
	I0916 04:12:49.285442    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:12:49.296853    4792 logs.go:276] 2 containers: [41d020d3dedc 609e5463648c]
	I0916 04:12:49.296943    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:12:49.308493    4792 logs.go:276] 1 containers: [0bd7f5ba91b9]
	I0916 04:12:49.308577    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:12:49.320310    4792 logs.go:276] 2 containers: [7a56c27a016a 97104cca0786]
	I0916 04:12:49.320394    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:12:49.331278    4792 logs.go:276] 0 containers: []
	W0916 04:12:49.331289    4792 logs.go:278] No container was found matching "kindnet"
	I0916 04:12:49.331361    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:12:49.342023    4792 logs.go:276] 2 containers: [03045b2f9560 ed1b9ebe2601]
	I0916 04:12:49.342042    4792 logs.go:123] Gathering logs for kube-proxy [0bd7f5ba91b9] ...
	I0916 04:12:49.342048    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd7f5ba91b9"
	I0916 04:12:49.354623    4792 logs.go:123] Gathering logs for storage-provisioner [ed1b9ebe2601] ...
	I0916 04:12:49.354639    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed1b9ebe2601"
	I0916 04:12:49.368581    4792 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:12:49.368593    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:12:49.406896    4792 logs.go:123] Gathering logs for etcd [1973f852f436] ...
	I0916 04:12:49.406908    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1973f852f436"
	I0916 04:12:49.422295    4792 logs.go:123] Gathering logs for kube-scheduler [609e5463648c] ...
	I0916 04:12:49.422312    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 609e5463648c"
	I0916 04:12:49.445405    4792 logs.go:123] Gathering logs for container status ...
	I0916 04:12:49.445423    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:12:49.457990    4792 logs.go:123] Gathering logs for kube-apiserver [99668d812a17] ...
	I0916 04:12:49.458004    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99668d812a17"
	I0916 04:12:49.473990    4792 logs.go:123] Gathering logs for kube-apiserver [40ab2f675d22] ...
	I0916 04:12:49.474002    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40ab2f675d22"
	I0916 04:12:49.513185    4792 logs.go:123] Gathering logs for kube-scheduler [41d020d3dedc] ...
	I0916 04:12:49.513196    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41d020d3dedc"
	I0916 04:12:49.526262    4792 logs.go:123] Gathering logs for coredns [ed5c57a396a3] ...
	I0916 04:12:49.526273    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5c57a396a3"
	I0916 04:12:49.538782    4792 logs.go:123] Gathering logs for Docker ...
	I0916 04:12:49.538794    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:12:49.564632    4792 logs.go:123] Gathering logs for kube-controller-manager [7a56c27a016a] ...
	I0916 04:12:49.564644    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a56c27a016a"
	I0916 04:12:49.581708    4792 logs.go:123] Gathering logs for kube-controller-manager [97104cca0786] ...
	I0916 04:12:49.581719    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97104cca0786"
	I0916 04:12:49.596689    4792 logs.go:123] Gathering logs for storage-provisioner [03045b2f9560] ...
	I0916 04:12:49.596700    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03045b2f9560"
	I0916 04:12:49.608766    4792 logs.go:123] Gathering logs for kubelet ...
	I0916 04:12:49.608776    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:12:49.647269    4792 logs.go:123] Gathering logs for dmesg ...
	I0916 04:12:49.647282    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:12:49.651652    4792 logs.go:123] Gathering logs for etcd [751f46d9a26d] ...
	I0916 04:12:49.651658    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 751f46d9a26d"
	I0916 04:12:49.195578    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:12:49.195835    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:12:49.214246    4655 logs.go:276] 1 containers: [bb77a45fbb50]
	I0916 04:12:49.214350    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:12:49.227764    4655 logs.go:276] 1 containers: [a44a59d44c6e]
	I0916 04:12:49.227861    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:12:49.239829    4655 logs.go:276] 2 containers: [8869c7622640 e3ad1db7ced5]
	I0916 04:12:49.239919    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:12:49.250149    4655 logs.go:276] 1 containers: [6497fc64f33e]
	I0916 04:12:49.250240    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:12:49.260943    4655 logs.go:276] 1 containers: [c0732c73e3bf]
	I0916 04:12:49.261031    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:12:49.272352    4655 logs.go:276] 1 containers: [6872f5cacb62]
	I0916 04:12:49.272433    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:12:49.283620    4655 logs.go:276] 0 containers: []
	W0916 04:12:49.283642    4655 logs.go:278] No container was found matching "kindnet"
	I0916 04:12:49.283724    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:12:49.294682    4655 logs.go:276] 1 containers: [1b9c4326b62d]
	I0916 04:12:49.294698    4655 logs.go:123] Gathering logs for kube-controller-manager [6872f5cacb62] ...
	I0916 04:12:49.294704    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6872f5cacb62"
	I0916 04:12:49.320187    4655 logs.go:123] Gathering logs for Docker ...
	I0916 04:12:49.320200    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:12:49.347142    4655 logs.go:123] Gathering logs for container status ...
	I0916 04:12:49.347155    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:12:49.360053    4655 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:12:49.360069    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:12:49.398980    4655 logs.go:123] Gathering logs for coredns [8869c7622640] ...
	I0916 04:12:49.398992    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8869c7622640"
	I0916 04:12:49.412126    4655 logs.go:123] Gathering logs for coredns [e3ad1db7ced5] ...
	I0916 04:12:49.412138    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3ad1db7ced5"
	I0916 04:12:49.424659    4655 logs.go:123] Gathering logs for kube-proxy [c0732c73e3bf] ...
	I0916 04:12:49.424669    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0732c73e3bf"
	I0916 04:12:49.438229    4655 logs.go:123] Gathering logs for kube-scheduler [6497fc64f33e] ...
	I0916 04:12:49.438240    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6497fc64f33e"
	I0916 04:12:49.454379    4655 logs.go:123] Gathering logs for storage-provisioner [1b9c4326b62d] ...
	I0916 04:12:49.454396    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b9c4326b62d"
	I0916 04:12:49.466815    4655 logs.go:123] Gathering logs for kubelet ...
	I0916 04:12:49.466831    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:12:49.506441    4655 logs.go:123] Gathering logs for dmesg ...
	I0916 04:12:49.506453    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:12:49.511691    4655 logs.go:123] Gathering logs for kube-apiserver [bb77a45fbb50] ...
	I0916 04:12:49.511698    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb77a45fbb50"
	I0916 04:12:49.536990    4655 logs.go:123] Gathering logs for etcd [a44a59d44c6e] ...
	I0916 04:12:49.537001    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a44a59d44c6e"
	I0916 04:12:52.054294    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:12:52.168020    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:12:57.056447    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:12:57.056654    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:12:57.073139    4655 logs.go:276] 1 containers: [bb77a45fbb50]
	I0916 04:12:57.073237    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:12:57.086275    4655 logs.go:276] 1 containers: [a44a59d44c6e]
	I0916 04:12:57.086367    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:12:57.097844    4655 logs.go:276] 2 containers: [8869c7622640 e3ad1db7ced5]
	I0916 04:12:57.097933    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:12:57.108420    4655 logs.go:276] 1 containers: [6497fc64f33e]
	I0916 04:12:57.108507    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:12:57.119436    4655 logs.go:276] 1 containers: [c0732c73e3bf]
	I0916 04:12:57.119517    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:12:57.130508    4655 logs.go:276] 1 containers: [6872f5cacb62]
	I0916 04:12:57.130590    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:12:57.141151    4655 logs.go:276] 0 containers: []
	W0916 04:12:57.141163    4655 logs.go:278] No container was found matching "kindnet"
	I0916 04:12:57.141232    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:12:57.154744    4655 logs.go:276] 1 containers: [1b9c4326b62d]
	I0916 04:12:57.154762    4655 logs.go:123] Gathering logs for kube-scheduler [6497fc64f33e] ...
	I0916 04:12:57.154768    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6497fc64f33e"
	I0916 04:12:57.169761    4655 logs.go:123] Gathering logs for kube-proxy [c0732c73e3bf] ...
	I0916 04:12:57.169771    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0732c73e3bf"
	I0916 04:12:57.188200    4655 logs.go:123] Gathering logs for kube-controller-manager [6872f5cacb62] ...
	I0916 04:12:57.188211    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6872f5cacb62"
	I0916 04:12:57.206874    4655 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:12:57.206886    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:12:57.257459    4655 logs.go:123] Gathering logs for etcd [a44a59d44c6e] ...
	I0916 04:12:57.257471    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a44a59d44c6e"
	I0916 04:12:57.272919    4655 logs.go:123] Gathering logs for coredns [8869c7622640] ...
	I0916 04:12:57.272932    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8869c7622640"
	I0916 04:12:57.285335    4655 logs.go:123] Gathering logs for coredns [e3ad1db7ced5] ...
	I0916 04:12:57.285346    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3ad1db7ced5"
	I0916 04:12:57.298545    4655 logs.go:123] Gathering logs for storage-provisioner [1b9c4326b62d] ...
	I0916 04:12:57.298558    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b9c4326b62d"
	I0916 04:12:57.311521    4655 logs.go:123] Gathering logs for Docker ...
	I0916 04:12:57.311536    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:12:57.338615    4655 logs.go:123] Gathering logs for container status ...
	I0916 04:12:57.338626    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:12:57.357030    4655 logs.go:123] Gathering logs for kubelet ...
	I0916 04:12:57.357041    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:12:57.399226    4655 logs.go:123] Gathering logs for dmesg ...
	I0916 04:12:57.399239    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:12:57.404636    4655 logs.go:123] Gathering logs for kube-apiserver [bb77a45fbb50] ...
	I0916 04:12:57.404646    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb77a45fbb50"
	I0916 04:12:57.170189    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:12:57.170296    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:12:57.181600    4792 logs.go:276] 2 containers: [99668d812a17 40ab2f675d22]
	I0916 04:12:57.181684    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:12:57.193042    4792 logs.go:276] 2 containers: [751f46d9a26d 1973f852f436]
	I0916 04:12:57.193141    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:12:57.210744    4792 logs.go:276] 1 containers: [ed5c57a396a3]
	I0916 04:12:57.210832    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:12:57.222452    4792 logs.go:276] 2 containers: [41d020d3dedc 609e5463648c]
	I0916 04:12:57.222541    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:12:57.234034    4792 logs.go:276] 1 containers: [0bd7f5ba91b9]
	I0916 04:12:57.234125    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:12:57.245132    4792 logs.go:276] 2 containers: [7a56c27a016a 97104cca0786]
	I0916 04:12:57.245213    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:12:57.256628    4792 logs.go:276] 0 containers: []
	W0916 04:12:57.256642    4792 logs.go:278] No container was found matching "kindnet"
	I0916 04:12:57.256711    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:12:57.268124    4792 logs.go:276] 2 containers: [03045b2f9560 ed1b9ebe2601]
	I0916 04:12:57.268145    4792 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:12:57.268151    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:12:57.305184    4792 logs.go:123] Gathering logs for kube-apiserver [99668d812a17] ...
	I0916 04:12:57.305197    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99668d812a17"
	I0916 04:12:57.320803    4792 logs.go:123] Gathering logs for etcd [1973f852f436] ...
	I0916 04:12:57.320814    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1973f852f436"
	I0916 04:12:57.338013    4792 logs.go:123] Gathering logs for kube-proxy [0bd7f5ba91b9] ...
	I0916 04:12:57.338025    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd7f5ba91b9"
	I0916 04:12:57.350655    4792 logs.go:123] Gathering logs for kube-controller-manager [7a56c27a016a] ...
	I0916 04:12:57.350668    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a56c27a016a"
	I0916 04:12:57.380969    4792 logs.go:123] Gathering logs for kube-controller-manager [97104cca0786] ...
	I0916 04:12:57.380980    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97104cca0786"
	I0916 04:12:57.394024    4792 logs.go:123] Gathering logs for storage-provisioner [03045b2f9560] ...
	I0916 04:12:57.394035    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03045b2f9560"
	I0916 04:12:57.406058    4792 logs.go:123] Gathering logs for storage-provisioner [ed1b9ebe2601] ...
	I0916 04:12:57.406067    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed1b9ebe2601"
	I0916 04:12:57.418566    4792 logs.go:123] Gathering logs for etcd [751f46d9a26d] ...
	I0916 04:12:57.418577    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 751f46d9a26d"
	I0916 04:12:57.440700    4792 logs.go:123] Gathering logs for container status ...
	I0916 04:12:57.440712    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:12:57.454136    4792 logs.go:123] Gathering logs for kubelet ...
	I0916 04:12:57.454149    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:12:57.492222    4792 logs.go:123] Gathering logs for dmesg ...
	I0916 04:12:57.492233    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:12:57.496275    4792 logs.go:123] Gathering logs for kube-apiserver [40ab2f675d22] ...
	I0916 04:12:57.496282    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40ab2f675d22"
	I0916 04:12:57.534364    4792 logs.go:123] Gathering logs for coredns [ed5c57a396a3] ...
	I0916 04:12:57.534377    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5c57a396a3"
	I0916 04:12:57.545633    4792 logs.go:123] Gathering logs for kube-scheduler [41d020d3dedc] ...
	I0916 04:12:57.545644    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41d020d3dedc"
	I0916 04:12:57.557544    4792 logs.go:123] Gathering logs for kube-scheduler [609e5463648c] ...
	I0916 04:12:57.557555    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 609e5463648c"
	I0916 04:12:57.578898    4792 logs.go:123] Gathering logs for Docker ...
	I0916 04:12:57.578915    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:12:59.922382    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:13:00.105337    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:13:04.923381    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:13:04.923556    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:13:04.937858    4655 logs.go:276] 1 containers: [bb77a45fbb50]
	I0916 04:13:04.937951    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:13:04.949774    4655 logs.go:276] 1 containers: [a44a59d44c6e]
	I0916 04:13:04.949862    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:13:04.961689    4655 logs.go:276] 2 containers: [8869c7622640 e3ad1db7ced5]
	I0916 04:13:04.961780    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:13:04.972456    4655 logs.go:276] 1 containers: [6497fc64f33e]
	I0916 04:13:04.972534    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:13:04.984644    4655 logs.go:276] 1 containers: [c0732c73e3bf]
	I0916 04:13:04.984723    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:13:04.995129    4655 logs.go:276] 1 containers: [6872f5cacb62]
	I0916 04:13:04.995217    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:13:05.006078    4655 logs.go:276] 0 containers: []
	W0916 04:13:05.006091    4655 logs.go:278] No container was found matching "kindnet"
	I0916 04:13:05.006168    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:13:05.017124    4655 logs.go:276] 1 containers: [1b9c4326b62d]
	I0916 04:13:05.017141    4655 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:13:05.017148    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:13:05.054785    4655 logs.go:123] Gathering logs for kube-apiserver [bb77a45fbb50] ...
	I0916 04:13:05.054800    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb77a45fbb50"
	I0916 04:13:05.069559    4655 logs.go:123] Gathering logs for coredns [8869c7622640] ...
	I0916 04:13:05.069569    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8869c7622640"
	I0916 04:13:05.082009    4655 logs.go:123] Gathering logs for kube-scheduler [6497fc64f33e] ...
	I0916 04:13:05.082021    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6497fc64f33e"
	I0916 04:13:05.096731    4655 logs.go:123] Gathering logs for kube-proxy [c0732c73e3bf] ...
	I0916 04:13:05.096741    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0732c73e3bf"
	I0916 04:13:05.108405    4655 logs.go:123] Gathering logs for Docker ...
	I0916 04:13:05.108416    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:13:05.134775    4655 logs.go:123] Gathering logs for container status ...
	I0916 04:13:05.134787    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:13:05.147800    4655 logs.go:123] Gathering logs for kubelet ...
	I0916 04:13:05.147833    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:13:05.188492    4655 logs.go:123] Gathering logs for etcd [a44a59d44c6e] ...
	I0916 04:13:05.188503    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a44a59d44c6e"
	I0916 04:13:05.204248    4655 logs.go:123] Gathering logs for coredns [e3ad1db7ced5] ...
	I0916 04:13:05.204261    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3ad1db7ced5"
	I0916 04:13:05.217091    4655 logs.go:123] Gathering logs for kube-controller-manager [6872f5cacb62] ...
	I0916 04:13:05.217104    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6872f5cacb62"
	I0916 04:13:05.236004    4655 logs.go:123] Gathering logs for storage-provisioner [1b9c4326b62d] ...
	I0916 04:13:05.236018    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b9c4326b62d"
	I0916 04:13:05.250062    4655 logs.go:123] Gathering logs for dmesg ...
	I0916 04:13:05.250074    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
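Every gathering pass repeats the same two steps visible above: a docker ps -a --filter=name=k8s_<component> --format={{.ID}} lookup per control-plane component, then docker logs --tail 400 on each container ID found. A minimal sketch of that loop follows, with the component list and tail length taken directly from the log; the function layout is illustrative, not minikube's logs.go:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// components mirrors the k8s_* name filters used in the log above.
var components = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
}

func main() {
	for _, c := range components {
		// Same lookup the log runs over ssh_runner:
		//   docker ps -a --filter=name=k8s_<component> --format={{.ID}}
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
		if err != nil {
			fmt.Printf("lookup %s failed: %v\n", c, err)
			continue
		}
		ids := strings.Fields(string(out))
		fmt.Printf("%d containers: %v\n", len(ids), ids)
		for _, id := range ids {
			// Tail the same 400 lines per container that the log gathers.
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("--- %s [%s]: %d bytes\n", c, id, len(logs))
		}
	}
}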
	I0916 04:13:07.756955    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:13:05.107470    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:13:05.107577    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:13:05.122702    4792 logs.go:276] 2 containers: [99668d812a17 40ab2f675d22]
	I0916 04:13:05.122788    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:13:05.134030    4792 logs.go:276] 2 containers: [751f46d9a26d 1973f852f436]
	I0916 04:13:05.134119    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:13:05.145768    4792 logs.go:276] 1 containers: [ed5c57a396a3]
	I0916 04:13:05.145857    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:13:05.163606    4792 logs.go:276] 2 containers: [41d020d3dedc 609e5463648c]
	I0916 04:13:05.163692    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:13:05.174689    4792 logs.go:276] 1 containers: [0bd7f5ba91b9]
	I0916 04:13:05.174768    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:13:05.186054    4792 logs.go:276] 2 containers: [7a56c27a016a 97104cca0786]
	I0916 04:13:05.186136    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:13:05.197639    4792 logs.go:276] 0 containers: []
	W0916 04:13:05.197654    4792 logs.go:278] No container was found matching "kindnet"
	I0916 04:13:05.197723    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:13:05.209557    4792 logs.go:276] 2 containers: [03045b2f9560 ed1b9ebe2601]
	I0916 04:13:05.209576    4792 logs.go:123] Gathering logs for etcd [751f46d9a26d] ...
	I0916 04:13:05.209582    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 751f46d9a26d"
	I0916 04:13:05.224822    4792 logs.go:123] Gathering logs for coredns [ed5c57a396a3] ...
	I0916 04:13:05.224835    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5c57a396a3"
	I0916 04:13:05.236971    4792 logs.go:123] Gathering logs for kube-scheduler [609e5463648c] ...
	I0916 04:13:05.236979    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 609e5463648c"
	I0916 04:13:05.259161    4792 logs.go:123] Gathering logs for kube-proxy [0bd7f5ba91b9] ...
	I0916 04:13:05.259171    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd7f5ba91b9"
	I0916 04:13:05.271465    4792 logs.go:123] Gathering logs for storage-provisioner [03045b2f9560] ...
	I0916 04:13:05.271474    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03045b2f9560"
	I0916 04:13:05.282673    4792 logs.go:123] Gathering logs for storage-provisioner [ed1b9ebe2601] ...
	I0916 04:13:05.282684    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed1b9ebe2601"
	I0916 04:13:05.293585    4792 logs.go:123] Gathering logs for dmesg ...
	I0916 04:13:05.293597    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:13:05.298034    4792 logs.go:123] Gathering logs for etcd [1973f852f436] ...
	I0916 04:13:05.298040    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1973f852f436"
	I0916 04:13:05.312518    4792 logs.go:123] Gathering logs for kube-apiserver [99668d812a17] ...
	I0916 04:13:05.312531    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99668d812a17"
	I0916 04:13:05.326127    4792 logs.go:123] Gathering logs for kube-scheduler [41d020d3dedc] ...
	I0916 04:13:05.326141    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41d020d3dedc"
	I0916 04:13:05.338250    4792 logs.go:123] Gathering logs for Docker ...
	I0916 04:13:05.338261    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:13:05.361492    4792 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:13:05.361499    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:13:05.395363    4792 logs.go:123] Gathering logs for kube-apiserver [40ab2f675d22] ...
	I0916 04:13:05.395378    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40ab2f675d22"
	I0916 04:13:05.434184    4792 logs.go:123] Gathering logs for kube-controller-manager [7a56c27a016a] ...
	I0916 04:13:05.434199    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a56c27a016a"
	I0916 04:13:05.451905    4792 logs.go:123] Gathering logs for kube-controller-manager [97104cca0786] ...
	I0916 04:13:05.451917    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97104cca0786"
	I0916 04:13:05.465025    4792 logs.go:123] Gathering logs for container status ...
	I0916 04:13:05.465038    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:13:05.476548    4792 logs.go:123] Gathering logs for kubelet ...
	I0916 04:13:05.476565    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:13:08.017974    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:13:12.759172    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:13:12.759442    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:13:12.775584    4655 logs.go:276] 1 containers: [bb77a45fbb50]
	I0916 04:13:12.775685    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:13:12.788381    4655 logs.go:276] 1 containers: [a44a59d44c6e]
	I0916 04:13:12.788455    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:13:12.798980    4655 logs.go:276] 2 containers: [8869c7622640 e3ad1db7ced5]
	I0916 04:13:12.799067    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:13:12.809304    4655 logs.go:276] 1 containers: [6497fc64f33e]
	I0916 04:13:12.809374    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:13:12.819931    4655 logs.go:276] 1 containers: [c0732c73e3bf]
	I0916 04:13:12.820019    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:13:12.830452    4655 logs.go:276] 1 containers: [6872f5cacb62]
	I0916 04:13:12.830526    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:13:12.840506    4655 logs.go:276] 0 containers: []
	W0916 04:13:12.840518    4655 logs.go:278] No container was found matching "kindnet"
	I0916 04:13:12.840591    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:13:12.851261    4655 logs.go:276] 1 containers: [1b9c4326b62d]
	I0916 04:13:12.851276    4655 logs.go:123] Gathering logs for kube-scheduler [6497fc64f33e] ...
	I0916 04:13:12.851281    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6497fc64f33e"
	I0916 04:13:12.866185    4655 logs.go:123] Gathering logs for kube-proxy [c0732c73e3bf] ...
	I0916 04:13:12.866195    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0732c73e3bf"
	I0916 04:13:12.878097    4655 logs.go:123] Gathering logs for storage-provisioner [1b9c4326b62d] ...
	I0916 04:13:12.878111    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b9c4326b62d"
	I0916 04:13:12.890387    4655 logs.go:123] Gathering logs for Docker ...
	I0916 04:13:12.890396    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:13:12.916080    4655 logs.go:123] Gathering logs for dmesg ...
	I0916 04:13:12.916093    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:13:12.921022    4655 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:13:12.921028    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:13:12.974558    4655 logs.go:123] Gathering logs for etcd [a44a59d44c6e] ...
	I0916 04:13:12.974570    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a44a59d44c6e"
	I0916 04:13:12.988710    4655 logs.go:123] Gathering logs for coredns [e3ad1db7ced5] ...
	I0916 04:13:12.988724    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3ad1db7ced5"
	I0916 04:13:13.001323    4655 logs.go:123] Gathering logs for kube-controller-manager [6872f5cacb62] ...
	I0916 04:13:13.001335    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6872f5cacb62"
	I0916 04:13:13.019418    4655 logs.go:123] Gathering logs for container status ...
	I0916 04:13:13.019434    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:13:13.031928    4655 logs.go:123] Gathering logs for kubelet ...
	I0916 04:13:13.031935    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:13:13.072670    4655 logs.go:123] Gathering logs for kube-apiserver [bb77a45fbb50] ...
	I0916 04:13:13.072688    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb77a45fbb50"
	I0916 04:13:13.090071    4655 logs.go:123] Gathering logs for coredns [8869c7622640] ...
	I0916 04:13:13.090089    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8869c7622640"
	I0916 04:13:13.020054    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:13:13.020146    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:13:13.031835    4792 logs.go:276] 2 containers: [99668d812a17 40ab2f675d22]
	I0916 04:13:13.031922    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:13:13.043329    4792 logs.go:276] 2 containers: [751f46d9a26d 1973f852f436]
	I0916 04:13:13.043415    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:13:13.054161    4792 logs.go:276] 1 containers: [ed5c57a396a3]
	I0916 04:13:13.054248    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:13:13.068462    4792 logs.go:276] 2 containers: [41d020d3dedc 609e5463648c]
	I0916 04:13:13.068546    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:13:13.079839    4792 logs.go:276] 1 containers: [0bd7f5ba91b9]
	I0916 04:13:13.079924    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:13:13.093155    4792 logs.go:276] 2 containers: [7a56c27a016a 97104cca0786]
	I0916 04:13:13.093240    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:13:13.104492    4792 logs.go:276] 0 containers: []
	W0916 04:13:13.104503    4792 logs.go:278] No container was found matching "kindnet"
	I0916 04:13:13.104575    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:13:13.115464    4792 logs.go:276] 2 containers: [03045b2f9560 ed1b9ebe2601]
	I0916 04:13:13.115484    4792 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:13:13.115490    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:13:13.151204    4792 logs.go:123] Gathering logs for etcd [1973f852f436] ...
	I0916 04:13:13.151219    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1973f852f436"
	I0916 04:13:13.166415    4792 logs.go:123] Gathering logs for kube-scheduler [609e5463648c] ...
	I0916 04:13:13.166427    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 609e5463648c"
	I0916 04:13:13.187838    4792 logs.go:123] Gathering logs for kube-controller-manager [97104cca0786] ...
	I0916 04:13:13.187850    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97104cca0786"
	I0916 04:13:13.200211    4792 logs.go:123] Gathering logs for Docker ...
	I0916 04:13:13.200221    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:13:13.223294    4792 logs.go:123] Gathering logs for container status ...
	I0916 04:13:13.223304    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:13:13.234829    4792 logs.go:123] Gathering logs for kube-apiserver [99668d812a17] ...
	I0916 04:13:13.234840    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99668d812a17"
	I0916 04:13:13.254065    4792 logs.go:123] Gathering logs for etcd [751f46d9a26d] ...
	I0916 04:13:13.254077    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 751f46d9a26d"
	I0916 04:13:13.267519    4792 logs.go:123] Gathering logs for kube-scheduler [41d020d3dedc] ...
	I0916 04:13:13.267527    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41d020d3dedc"
	I0916 04:13:13.278770    4792 logs.go:123] Gathering logs for storage-provisioner [ed1b9ebe2601] ...
	I0916 04:13:13.278781    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed1b9ebe2601"
	I0916 04:13:13.290508    4792 logs.go:123] Gathering logs for kubelet ...
	I0916 04:13:13.290520    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:13:13.326473    4792 logs.go:123] Gathering logs for kube-apiserver [40ab2f675d22] ...
	I0916 04:13:13.326481    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40ab2f675d22"
	I0916 04:13:13.365457    4792 logs.go:123] Gathering logs for coredns [ed5c57a396a3] ...
	I0916 04:13:13.365469    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5c57a396a3"
	I0916 04:13:13.376511    4792 logs.go:123] Gathering logs for kube-controller-manager [7a56c27a016a] ...
	I0916 04:13:13.376522    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a56c27a016a"
	I0916 04:13:13.405128    4792 logs.go:123] Gathering logs for storage-provisioner [03045b2f9560] ...
	I0916 04:13:13.405142    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03045b2f9560"
	I0916 04:13:13.416384    4792 logs.go:123] Gathering logs for dmesg ...
	I0916 04:13:13.416399    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:13:13.420803    4792 logs.go:123] Gathering logs for kube-proxy [0bd7f5ba91b9] ...
	I0916 04:13:13.420809    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd7f5ba91b9"
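The two interleaved PIDs (4655 and 4792) each repeat this cycle for minutes without the apiserver ever answering. A sketch of the surrounding retry loop under assumed timings (the roughly three-second pause between a failed probe and the next check is read off the log timestamps; the overall budget here is arbitrary, and minikube's real orchestration differs):

package main

import (
	"fmt"
	"time"
)

// pollUntil retries check until it succeeds or the overall budget runs out,
// calling onFail (e.g. a diagnostics-gathering pass like the one in the log)
// after each failed attempt.
func pollUntil(budget, pause time.Duration, check func() error, onFail func()) error {
	deadline := time.Now().Add(budget)
	for time.Now().Before(deadline) {
		if err := check(); err == nil {
			return nil
		}
		onFail()
		time.Sleep(pause)
	}
	return fmt.Errorf("apiserver never became healthy within %s", budget)
}

func main() {
	err := pollUntil(30*time.Second, 3*time.Second,
		func() error { return fmt.Errorf("healthz timed out") }, // stub probe
		func() { fmt.Println("gathering diagnostics ...") },
	)
	fmt.Println(err)
}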
	I0916 04:13:15.604938    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:13:15.934153    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:13:20.607192    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:13:20.607472    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:13:20.626010    4655 logs.go:276] 1 containers: [bb77a45fbb50]
	I0916 04:13:20.626125    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:13:20.640506    4655 logs.go:276] 1 containers: [a44a59d44c6e]
	I0916 04:13:20.640585    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:13:20.652687    4655 logs.go:276] 2 containers: [8869c7622640 e3ad1db7ced5]
	I0916 04:13:20.652776    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:13:20.663565    4655 logs.go:276] 1 containers: [6497fc64f33e]
	I0916 04:13:20.663640    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:13:20.674195    4655 logs.go:276] 1 containers: [c0732c73e3bf]
	I0916 04:13:20.674286    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:13:20.684387    4655 logs.go:276] 1 containers: [6872f5cacb62]
	I0916 04:13:20.684470    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:13:20.694199    4655 logs.go:276] 0 containers: []
	W0916 04:13:20.694209    4655 logs.go:278] No container was found matching "kindnet"
	I0916 04:13:20.694271    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:13:20.704511    4655 logs.go:276] 1 containers: [1b9c4326b62d]
	I0916 04:13:20.704527    4655 logs.go:123] Gathering logs for kube-apiserver [bb77a45fbb50] ...
	I0916 04:13:20.704532    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb77a45fbb50"
	I0916 04:13:20.718952    4655 logs.go:123] Gathering logs for etcd [a44a59d44c6e] ...
	I0916 04:13:20.718962    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a44a59d44c6e"
	I0916 04:13:20.740869    4655 logs.go:123] Gathering logs for coredns [e3ad1db7ced5] ...
	I0916 04:13:20.740880    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3ad1db7ced5"
	I0916 04:13:20.753450    4655 logs.go:123] Gathering logs for kube-scheduler [6497fc64f33e] ...
	I0916 04:13:20.753460    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6497fc64f33e"
	I0916 04:13:20.768592    4655 logs.go:123] Gathering logs for kube-controller-manager [6872f5cacb62] ...
	I0916 04:13:20.768603    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6872f5cacb62"
	I0916 04:13:20.786138    4655 logs.go:123] Gathering logs for storage-provisioner [1b9c4326b62d] ...
	I0916 04:13:20.786147    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b9c4326b62d"
	I0916 04:13:20.797991    4655 logs.go:123] Gathering logs for dmesg ...
	I0916 04:13:20.798001    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:13:20.802534    4655 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:13:20.802540    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:13:20.838253    4655 logs.go:123] Gathering logs for Docker ...
	I0916 04:13:20.838264    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:13:20.861463    4655 logs.go:123] Gathering logs for kube-proxy [c0732c73e3bf] ...
	I0916 04:13:20.861470    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0732c73e3bf"
	I0916 04:13:20.881122    4655 logs.go:123] Gathering logs for container status ...
	I0916 04:13:20.881133    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:13:20.892550    4655 logs.go:123] Gathering logs for kubelet ...
	I0916 04:13:20.892560    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:13:20.929712    4655 logs.go:123] Gathering logs for coredns [8869c7622640] ...
	I0916 04:13:20.929720    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8869c7622640"
	I0916 04:13:20.936422    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:13:20.936517    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:13:20.947568    4792 logs.go:276] 2 containers: [99668d812a17 40ab2f675d22]
	I0916 04:13:20.947646    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:13:20.958011    4792 logs.go:276] 2 containers: [751f46d9a26d 1973f852f436]
	I0916 04:13:20.958099    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:13:20.972702    4792 logs.go:276] 1 containers: [ed5c57a396a3]
	I0916 04:13:20.972785    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:13:20.988068    4792 logs.go:276] 2 containers: [41d020d3dedc 609e5463648c]
	I0916 04:13:20.988155    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:13:21.002554    4792 logs.go:276] 1 containers: [0bd7f5ba91b9]
	I0916 04:13:21.002641    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:13:21.014562    4792 logs.go:276] 2 containers: [7a56c27a016a 97104cca0786]
	I0916 04:13:21.014642    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:13:21.028922    4792 logs.go:276] 0 containers: []
	W0916 04:13:21.028935    4792 logs.go:278] No container was found matching "kindnet"
	I0916 04:13:21.029013    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:13:21.039597    4792 logs.go:276] 2 containers: [03045b2f9560 ed1b9ebe2601]
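Editor's note: each discovery pass, like the one just above, issues one docker ps query per control-plane component, filtering on the k8s_<component> name prefix that the kubelet gives pod containers and printing only IDs. Two IDs for a component (as for kube-apiserver and etcd here) typically mean a restarted container plus its exited predecessor, since -a includes stopped containers; zero matches produce the "No container was found matching" warning seen for kindnet, which is expected when no kindnet CNI is deployed. A sketch of one such query, assuming local Docker access instead of minikube's SSH runner:

    // Sketch: list all container IDs (running or exited) whose names match the
    // kubelet's k8s_<component> prefix, as the discovery pass above does.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil // one ID per output line
    }

    func main() {
        ids, err := containerIDs("kube-apiserver")
        fmt.Println(ids, err)
    }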
	I0916 04:13:21.039616    4792 logs.go:123] Gathering logs for etcd [1973f852f436] ...
	I0916 04:13:21.039622    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1973f852f436"
	I0916 04:13:21.054243    4792 logs.go:123] Gathering logs for coredns [ed5c57a396a3] ...
	I0916 04:13:21.054253    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5c57a396a3"
	I0916 04:13:21.065899    4792 logs.go:123] Gathering logs for storage-provisioner [ed1b9ebe2601] ...
	I0916 04:13:21.065910    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed1b9ebe2601"
	I0916 04:13:21.077219    4792 logs.go:123] Gathering logs for kubelet ...
	I0916 04:13:21.077230    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:13:21.112917    4792 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:13:21.112925    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:13:21.147863    4792 logs.go:123] Gathering logs for kube-apiserver [99668d812a17] ...
	I0916 04:13:21.147874    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99668d812a17"
	I0916 04:13:21.162504    4792 logs.go:123] Gathering logs for container status ...
	I0916 04:13:21.162515    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:13:21.174482    4792 logs.go:123] Gathering logs for dmesg ...
	I0916 04:13:21.174493    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:13:21.178593    4792 logs.go:123] Gathering logs for kube-apiserver [40ab2f675d22] ...
	I0916 04:13:21.178602    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40ab2f675d22"
	I0916 04:13:21.215990    4792 logs.go:123] Gathering logs for kube-scheduler [41d020d3dedc] ...
	I0916 04:13:21.216004    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41d020d3dedc"
	I0916 04:13:21.227899    4792 logs.go:123] Gathering logs for Docker ...
	I0916 04:13:21.227910    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:13:21.251272    4792 logs.go:123] Gathering logs for etcd [751f46d9a26d] ...
	I0916 04:13:21.251281    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 751f46d9a26d"
	I0916 04:13:21.265181    4792 logs.go:123] Gathering logs for kube-scheduler [609e5463648c] ...
	I0916 04:13:21.265192    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 609e5463648c"
	I0916 04:13:21.286014    4792 logs.go:123] Gathering logs for kube-controller-manager [97104cca0786] ...
	I0916 04:13:21.286023    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97104cca0786"
	I0916 04:13:21.301354    4792 logs.go:123] Gathering logs for kube-proxy [0bd7f5ba91b9] ...
	I0916 04:13:21.301369    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd7f5ba91b9"
	I0916 04:13:21.313191    4792 logs.go:123] Gathering logs for kube-controller-manager [7a56c27a016a] ...
	I0916 04:13:21.313204    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a56c27a016a"
	I0916 04:13:21.330402    4792 logs.go:123] Gathering logs for storage-provisioner [03045b2f9560] ...
	I0916 04:13:21.330413    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03045b2f9560"
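Editor's note: the gathering pass that just finished fans out over three sources, each capped to keep the report bounded: container logs via "docker logs --tail 400 <id>", systemd units (kubelet, docker, cri-docker) via "journalctl -u <unit> -n 400", and kernel messages via dmesg filtered to warning level and above. A rough sketch of the two main helpers, under the assumption of local access (minikube actually runs these over SSH inside the guest):

    // Sketch: the two log-tailing helpers implied by the gathering pass above,
    // both capped at 400 lines like the logged commands.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func tailContainer(id string) (string, error) {
        out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
        return string(out), err
    }

    func tailUnits(units ...string) (string, error) {
        args := []string{"journalctl"}
        for _, u := range units {
            args = append(args, "-u", u)
        }
        args = append(args, "-n", "400")
        out, err := exec.Command("sudo", args...).CombinedOutput()
        return string(out), err
    }

    func main() {
        fmt.Println(tailContainer("99668d812a17")) // container ID from the log above
        fmt.Println(tailUnits("docker", "cri-docker"))
    }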
	I0916 04:13:23.843785    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:13:23.443502    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:13:28.845970    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:13:28.846154    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:13:28.857361    4792 logs.go:276] 2 containers: [99668d812a17 40ab2f675d22]
	I0916 04:13:28.857444    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:13:28.867512    4792 logs.go:276] 2 containers: [751f46d9a26d 1973f852f436]
	I0916 04:13:28.867595    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:13:28.877414    4792 logs.go:276] 1 containers: [ed5c57a396a3]
	I0916 04:13:28.877504    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:13:28.887876    4792 logs.go:276] 2 containers: [41d020d3dedc 609e5463648c]
	I0916 04:13:28.887954    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:13:28.898033    4792 logs.go:276] 1 containers: [0bd7f5ba91b9]
	I0916 04:13:28.898106    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:13:28.909311    4792 logs.go:276] 2 containers: [7a56c27a016a 97104cca0786]
	I0916 04:13:28.909383    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:13:28.919521    4792 logs.go:276] 0 containers: []
	W0916 04:13:28.919533    4792 logs.go:278] No container was found matching "kindnet"
	I0916 04:13:28.919606    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:13:28.934792    4792 logs.go:276] 2 containers: [03045b2f9560 ed1b9ebe2601]
	I0916 04:13:28.934808    4792 logs.go:123] Gathering logs for kubelet ...
	I0916 04:13:28.934814    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:13:28.973304    4792 logs.go:123] Gathering logs for kube-apiserver [40ab2f675d22] ...
	I0916 04:13:28.973317    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40ab2f675d22"
	I0916 04:13:29.013953    4792 logs.go:123] Gathering logs for kube-scheduler [41d020d3dedc] ...
	I0916 04:13:29.013972    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41d020d3dedc"
	I0916 04:13:29.026415    4792 logs.go:123] Gathering logs for kube-controller-manager [97104cca0786] ...
	I0916 04:13:29.026427    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97104cca0786"
	I0916 04:13:29.039231    4792 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:13:29.039242    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:13:29.077340    4792 logs.go:123] Gathering logs for kube-apiserver [99668d812a17] ...
	I0916 04:13:29.077351    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99668d812a17"
	I0916 04:13:29.091840    4792 logs.go:123] Gathering logs for etcd [751f46d9a26d] ...
	I0916 04:13:29.091851    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 751f46d9a26d"
	I0916 04:13:29.105510    4792 logs.go:123] Gathering logs for kube-scheduler [609e5463648c] ...
	I0916 04:13:29.105519    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 609e5463648c"
	I0916 04:13:29.126637    4792 logs.go:123] Gathering logs for kube-proxy [0bd7f5ba91b9] ...
	I0916 04:13:29.126649    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd7f5ba91b9"
	I0916 04:13:29.138651    4792 logs.go:123] Gathering logs for kube-controller-manager [7a56c27a016a] ...
	I0916 04:13:29.138662    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a56c27a016a"
	I0916 04:13:29.155898    4792 logs.go:123] Gathering logs for coredns [ed5c57a396a3] ...
	I0916 04:13:29.155913    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5c57a396a3"
	I0916 04:13:29.167146    4792 logs.go:123] Gathering logs for storage-provisioner [03045b2f9560] ...
	I0916 04:13:29.167156    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03045b2f9560"
	I0916 04:13:29.179926    4792 logs.go:123] Gathering logs for storage-provisioner [ed1b9ebe2601] ...
	I0916 04:13:29.179935    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed1b9ebe2601"
	I0916 04:13:29.191850    4792 logs.go:123] Gathering logs for Docker ...
	I0916 04:13:29.191860    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:13:29.214847    4792 logs.go:123] Gathering logs for dmesg ...
	I0916 04:13:29.214863    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:13:29.218970    4792 logs.go:123] Gathering logs for etcd [1973f852f436] ...
	I0916 04:13:29.218977    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1973f852f436"
	I0916 04:13:29.232856    4792 logs.go:123] Gathering logs for container status ...
	I0916 04:13:29.232866    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:13:28.445683    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:13:28.446052    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:13:28.471905    4655 logs.go:276] 1 containers: [bb77a45fbb50]
	I0916 04:13:28.472034    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:13:28.489358    4655 logs.go:276] 1 containers: [a44a59d44c6e]
	I0916 04:13:28.489473    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:13:28.503038    4655 logs.go:276] 4 containers: [5798ae515cc4 c83fcdb1c777 8869c7622640 e3ad1db7ced5]
	I0916 04:13:28.503130    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:13:28.515184    4655 logs.go:276] 1 containers: [6497fc64f33e]
	I0916 04:13:28.515270    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:13:28.525330    4655 logs.go:276] 1 containers: [c0732c73e3bf]
	I0916 04:13:28.525407    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:13:28.536365    4655 logs.go:276] 1 containers: [6872f5cacb62]
	I0916 04:13:28.536443    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:13:28.546215    4655 logs.go:276] 0 containers: []
	W0916 04:13:28.546226    4655 logs.go:278] No container was found matching "kindnet"
	I0916 04:13:28.546295    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:13:28.558405    4655 logs.go:276] 1 containers: [1b9c4326b62d]
	I0916 04:13:28.558425    4655 logs.go:123] Gathering logs for dmesg ...
	I0916 04:13:28.558431    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:13:28.563281    4655 logs.go:123] Gathering logs for kubelet ...
	I0916 04:13:28.563287    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:13:28.598602    4655 logs.go:123] Gathering logs for coredns [5798ae515cc4] ...
	I0916 04:13:28.598609    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5798ae515cc4"
	I0916 04:13:28.609593    4655 logs.go:123] Gathering logs for kube-proxy [c0732c73e3bf] ...
	I0916 04:13:28.609603    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0732c73e3bf"
	I0916 04:13:28.625570    4655 logs.go:123] Gathering logs for Docker ...
	I0916 04:13:28.625580    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:13:28.649150    4655 logs.go:123] Gathering logs for kube-apiserver [bb77a45fbb50] ...
	I0916 04:13:28.649160    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb77a45fbb50"
	I0916 04:13:28.663032    4655 logs.go:123] Gathering logs for coredns [e3ad1db7ced5] ...
	I0916 04:13:28.663042    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3ad1db7ced5"
	I0916 04:13:28.674468    4655 logs.go:123] Gathering logs for kube-controller-manager [6872f5cacb62] ...
	I0916 04:13:28.674478    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6872f5cacb62"
	I0916 04:13:28.692018    4655 logs.go:123] Gathering logs for container status ...
	I0916 04:13:28.692028    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:13:28.704238    4655 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:13:28.704249    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:13:28.739292    4655 logs.go:123] Gathering logs for etcd [a44a59d44c6e] ...
	I0916 04:13:28.739303    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a44a59d44c6e"
	I0916 04:13:28.768978    4655 logs.go:123] Gathering logs for coredns [c83fcdb1c777] ...
	I0916 04:13:28.768992    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c83fcdb1c777"
	I0916 04:13:28.780293    4655 logs.go:123] Gathering logs for coredns [8869c7622640] ...
	I0916 04:13:28.780304    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8869c7622640"
	I0916 04:13:28.791878    4655 logs.go:123] Gathering logs for kube-scheduler [6497fc64f33e] ...
	I0916 04:13:28.791887    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6497fc64f33e"
	I0916 04:13:28.814112    4655 logs.go:123] Gathering logs for storage-provisioner [1b9c4326b62d] ...
	I0916 04:13:28.814122    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b9c4326b62d"
	I0916 04:13:31.327593    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:13:31.747316    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:13:36.329597    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:13:36.329766    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:13:36.341171    4655 logs.go:276] 1 containers: [bb77a45fbb50]
	I0916 04:13:36.341252    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:13:36.351963    4655 logs.go:276] 1 containers: [a44a59d44c6e]
	I0916 04:13:36.352053    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:13:36.366594    4655 logs.go:276] 4 containers: [5798ae515cc4 c83fcdb1c777 8869c7622640 e3ad1db7ced5]
	I0916 04:13:36.366679    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:13:36.376838    4655 logs.go:276] 1 containers: [6497fc64f33e]
	I0916 04:13:36.376932    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:13:36.387622    4655 logs.go:276] 1 containers: [c0732c73e3bf]
	I0916 04:13:36.387695    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:13:36.402496    4655 logs.go:276] 1 containers: [6872f5cacb62]
	I0916 04:13:36.402568    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:13:36.412821    4655 logs.go:276] 0 containers: []
	W0916 04:13:36.412832    4655 logs.go:278] No container was found matching "kindnet"
	I0916 04:13:36.412903    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:13:36.423404    4655 logs.go:276] 1 containers: [1b9c4326b62d]
	I0916 04:13:36.423420    4655 logs.go:123] Gathering logs for kube-scheduler [6497fc64f33e] ...
	I0916 04:13:36.423425    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6497fc64f33e"
	I0916 04:13:36.440360    4655 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:13:36.440369    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:13:36.477520    4655 logs.go:123] Gathering logs for coredns [5798ae515cc4] ...
	I0916 04:13:36.477532    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5798ae515cc4"
	I0916 04:13:36.489131    4655 logs.go:123] Gathering logs for coredns [e3ad1db7ced5] ...
	I0916 04:13:36.489141    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3ad1db7ced5"
	I0916 04:13:36.508926    4655 logs.go:123] Gathering logs for dmesg ...
	I0916 04:13:36.508941    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:13:36.514922    4655 logs.go:123] Gathering logs for coredns [8869c7622640] ...
	I0916 04:13:36.514930    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8869c7622640"
	I0916 04:13:36.530679    4655 logs.go:123] Gathering logs for kube-proxy [c0732c73e3bf] ...
	I0916 04:13:36.530690    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0732c73e3bf"
	I0916 04:13:36.541896    4655 logs.go:123] Gathering logs for kubelet ...
	I0916 04:13:36.541906    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:13:36.577910    4655 logs.go:123] Gathering logs for kube-apiserver [bb77a45fbb50] ...
	I0916 04:13:36.577918    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb77a45fbb50"
	I0916 04:13:36.592220    4655 logs.go:123] Gathering logs for storage-provisioner [1b9c4326b62d] ...
	I0916 04:13:36.592231    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b9c4326b62d"
	I0916 04:13:36.603696    4655 logs.go:123] Gathering logs for Docker ...
	I0916 04:13:36.603706    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:13:36.628189    4655 logs.go:123] Gathering logs for container status ...
	I0916 04:13:36.628201    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:13:36.639953    4655 logs.go:123] Gathering logs for etcd [a44a59d44c6e] ...
	I0916 04:13:36.639965    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a44a59d44c6e"
	I0916 04:13:36.655441    4655 logs.go:123] Gathering logs for coredns [c83fcdb1c777] ...
	I0916 04:13:36.655451    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c83fcdb1c777"
	I0916 04:13:36.667249    4655 logs.go:123] Gathering logs for kube-controller-manager [6872f5cacb62] ...
	I0916 04:13:36.667260    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6872f5cacb62"
	I0916 04:13:36.748359    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:13:36.748529    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:13:36.759040    4792 logs.go:276] 2 containers: [99668d812a17 40ab2f675d22]
	I0916 04:13:36.759127    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:13:36.779604    4792 logs.go:276] 2 containers: [751f46d9a26d 1973f852f436]
	I0916 04:13:36.779687    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:13:36.790030    4792 logs.go:276] 1 containers: [ed5c57a396a3]
	I0916 04:13:36.790106    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:13:36.800626    4792 logs.go:276] 2 containers: [41d020d3dedc 609e5463648c]
	I0916 04:13:36.800710    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:13:36.810942    4792 logs.go:276] 1 containers: [0bd7f5ba91b9]
	I0916 04:13:36.811016    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:13:36.821537    4792 logs.go:276] 2 containers: [7a56c27a016a 97104cca0786]
	I0916 04:13:36.821608    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:13:36.832023    4792 logs.go:276] 0 containers: []
	W0916 04:13:36.832034    4792 logs.go:278] No container was found matching "kindnet"
	I0916 04:13:36.832097    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:13:36.842480    4792 logs.go:276] 2 containers: [03045b2f9560 ed1b9ebe2601]
	I0916 04:13:36.842497    4792 logs.go:123] Gathering logs for coredns [ed5c57a396a3] ...
	I0916 04:13:36.842502    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5c57a396a3"
	I0916 04:13:36.853504    4792 logs.go:123] Gathering logs for container status ...
	I0916 04:13:36.853516    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:13:36.865141    4792 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:13:36.865152    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:13:36.900052    4792 logs.go:123] Gathering logs for kube-proxy [0bd7f5ba91b9] ...
	I0916 04:13:36.900064    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd7f5ba91b9"
	I0916 04:13:36.912666    4792 logs.go:123] Gathering logs for kube-controller-manager [7a56c27a016a] ...
	I0916 04:13:36.912676    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a56c27a016a"
	I0916 04:13:36.930811    4792 logs.go:123] Gathering logs for storage-provisioner [03045b2f9560] ...
	I0916 04:13:36.930825    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03045b2f9560"
	I0916 04:13:36.942516    4792 logs.go:123] Gathering logs for kubelet ...
	I0916 04:13:36.942532    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:13:36.978248    4792 logs.go:123] Gathering logs for dmesg ...
	I0916 04:13:36.978255    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:13:36.982408    4792 logs.go:123] Gathering logs for etcd [751f46d9a26d] ...
	I0916 04:13:36.982415    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 751f46d9a26d"
	I0916 04:13:36.996763    4792 logs.go:123] Gathering logs for etcd [1973f852f436] ...
	I0916 04:13:36.996775    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1973f852f436"
	I0916 04:13:37.014093    4792 logs.go:123] Gathering logs for kube-scheduler [41d020d3dedc] ...
	I0916 04:13:37.014106    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41d020d3dedc"
	I0916 04:13:37.026290    4792 logs.go:123] Gathering logs for storage-provisioner [ed1b9ebe2601] ...
	I0916 04:13:37.026306    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed1b9ebe2601"
	I0916 04:13:37.043489    4792 logs.go:123] Gathering logs for Docker ...
	I0916 04:13:37.043499    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:13:37.066121    4792 logs.go:123] Gathering logs for kube-apiserver [99668d812a17] ...
	I0916 04:13:37.066129    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99668d812a17"
	I0916 04:13:37.080242    4792 logs.go:123] Gathering logs for kube-apiserver [40ab2f675d22] ...
	I0916 04:13:37.080252    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40ab2f675d22"
	I0916 04:13:37.117945    4792 logs.go:123] Gathering logs for kube-scheduler [609e5463648c] ...
	I0916 04:13:37.117955    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 609e5463648c"
	I0916 04:13:37.139282    4792 logs.go:123] Gathering logs for kube-controller-manager [97104cca0786] ...
	I0916 04:13:37.139296    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97104cca0786"
	I0916 04:13:39.653456    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:13:39.194035    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:13:44.655673    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:13:44.655839    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:13:44.667094    4792 logs.go:276] 2 containers: [99668d812a17 40ab2f675d22]
	I0916 04:13:44.667175    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:13:44.678059    4792 logs.go:276] 2 containers: [751f46d9a26d 1973f852f436]
	I0916 04:13:44.678147    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:13:44.689023    4792 logs.go:276] 1 containers: [ed5c57a396a3]
	I0916 04:13:44.689107    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:13:44.699658    4792 logs.go:276] 2 containers: [41d020d3dedc 609e5463648c]
	I0916 04:13:44.699743    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:13:44.715500    4792 logs.go:276] 1 containers: [0bd7f5ba91b9]
	I0916 04:13:44.715584    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:13:44.726143    4792 logs.go:276] 2 containers: [7a56c27a016a 97104cca0786]
	I0916 04:13:44.726223    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:13:44.736416    4792 logs.go:276] 0 containers: []
	W0916 04:13:44.736427    4792 logs.go:278] No container was found matching "kindnet"
	I0916 04:13:44.736494    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:13:44.747220    4792 logs.go:276] 2 containers: [03045b2f9560 ed1b9ebe2601]
	I0916 04:13:44.747238    4792 logs.go:123] Gathering logs for kubelet ...
	I0916 04:13:44.747244    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:13:44.783862    4792 logs.go:123] Gathering logs for kube-scheduler [609e5463648c] ...
	I0916 04:13:44.783876    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 609e5463648c"
	I0916 04:13:44.804986    4792 logs.go:123] Gathering logs for kube-proxy [0bd7f5ba91b9] ...
	I0916 04:13:44.804999    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd7f5ba91b9"
	I0916 04:13:44.816704    4792 logs.go:123] Gathering logs for etcd [751f46d9a26d] ...
	I0916 04:13:44.816715    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 751f46d9a26d"
	I0916 04:13:44.196319    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:13:44.196453    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:13:44.209114    4655 logs.go:276] 1 containers: [bb77a45fbb50]
	I0916 04:13:44.209202    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:13:44.220307    4655 logs.go:276] 1 containers: [a44a59d44c6e]
	I0916 04:13:44.220396    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:13:44.231691    4655 logs.go:276] 4 containers: [5798ae515cc4 c83fcdb1c777 8869c7622640 e3ad1db7ced5]
	I0916 04:13:44.231778    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:13:44.242953    4655 logs.go:276] 1 containers: [6497fc64f33e]
	I0916 04:13:44.243033    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:13:44.254077    4655 logs.go:276] 1 containers: [c0732c73e3bf]
	I0916 04:13:44.254162    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:13:44.264496    4655 logs.go:276] 1 containers: [6872f5cacb62]
	I0916 04:13:44.264573    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:13:44.275177    4655 logs.go:276] 0 containers: []
	W0916 04:13:44.275191    4655 logs.go:278] No container was found matching "kindnet"
	I0916 04:13:44.275265    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:13:44.286171    4655 logs.go:276] 1 containers: [1b9c4326b62d]
	I0916 04:13:44.286188    4655 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:13:44.286193    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:13:44.320148    4655 logs.go:123] Gathering logs for etcd [a44a59d44c6e] ...
	I0916 04:13:44.320159    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a44a59d44c6e"
	I0916 04:13:44.334181    4655 logs.go:123] Gathering logs for coredns [e3ad1db7ced5] ...
	I0916 04:13:44.334191    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3ad1db7ced5"
	I0916 04:13:44.348277    4655 logs.go:123] Gathering logs for dmesg ...
	I0916 04:13:44.348286    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:13:44.353207    4655 logs.go:123] Gathering logs for kube-apiserver [bb77a45fbb50] ...
	I0916 04:13:44.353212    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb77a45fbb50"
	I0916 04:13:44.366878    4655 logs.go:123] Gathering logs for kube-controller-manager [6872f5cacb62] ...
	I0916 04:13:44.366887    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6872f5cacb62"
	I0916 04:13:44.384497    4655 logs.go:123] Gathering logs for storage-provisioner [1b9c4326b62d] ...
	I0916 04:13:44.384507    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b9c4326b62d"
	I0916 04:13:44.396083    4655 logs.go:123] Gathering logs for container status ...
	I0916 04:13:44.396094    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:13:44.408101    4655 logs.go:123] Gathering logs for Docker ...
	I0916 04:13:44.408117    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:13:44.431404    4655 logs.go:123] Gathering logs for kube-proxy [c0732c73e3bf] ...
	I0916 04:13:44.431413    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0732c73e3bf"
	I0916 04:13:44.443692    4655 logs.go:123] Gathering logs for kubelet ...
	I0916 04:13:44.443706    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:13:44.481890    4655 logs.go:123] Gathering logs for coredns [5798ae515cc4] ...
	I0916 04:13:44.481902    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5798ae515cc4"
	I0916 04:13:44.494259    4655 logs.go:123] Gathering logs for coredns [c83fcdb1c777] ...
	I0916 04:13:44.494272    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c83fcdb1c777"
	I0916 04:13:44.506062    4655 logs.go:123] Gathering logs for coredns [8869c7622640] ...
	I0916 04:13:44.506073    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8869c7622640"
	I0916 04:13:44.517987    4655 logs.go:123] Gathering logs for kube-scheduler [6497fc64f33e] ...
	I0916 04:13:44.517997    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6497fc64f33e"
	I0916 04:13:47.034593    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:13:44.830598    4792 logs.go:123] Gathering logs for etcd [1973f852f436] ...
	I0916 04:13:44.830613    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1973f852f436"
	I0916 04:13:44.847125    4792 logs.go:123] Gathering logs for kube-scheduler [41d020d3dedc] ...
	I0916 04:13:44.847135    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41d020d3dedc"
	I0916 04:13:44.858817    4792 logs.go:123] Gathering logs for storage-provisioner [03045b2f9560] ...
	I0916 04:13:44.858828    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03045b2f9560"
	I0916 04:13:44.872558    4792 logs.go:123] Gathering logs for storage-provisioner [ed1b9ebe2601] ...
	I0916 04:13:44.872569    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed1b9ebe2601"
	I0916 04:13:44.884103    4792 logs.go:123] Gathering logs for container status ...
	I0916 04:13:44.884117    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:13:44.896001    4792 logs.go:123] Gathering logs for dmesg ...
	I0916 04:13:44.896011    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:13:44.899999    4792 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:13:44.900008    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:13:44.936096    4792 logs.go:123] Gathering logs for kube-apiserver [40ab2f675d22] ...
	I0916 04:13:44.936109    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40ab2f675d22"
	I0916 04:13:44.976150    4792 logs.go:123] Gathering logs for coredns [ed5c57a396a3] ...
	I0916 04:13:44.976164    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5c57a396a3"
	I0916 04:13:44.990401    4792 logs.go:123] Gathering logs for Docker ...
	I0916 04:13:44.990414    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:13:45.015887    4792 logs.go:123] Gathering logs for kube-apiserver [99668d812a17] ...
	I0916 04:13:45.015908    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99668d812a17"
	I0916 04:13:45.030452    4792 logs.go:123] Gathering logs for kube-controller-manager [7a56c27a016a] ...
	I0916 04:13:45.030463    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a56c27a016a"
	I0916 04:13:45.048774    4792 logs.go:123] Gathering logs for kube-controller-manager [97104cca0786] ...
	I0916 04:13:45.048788    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97104cca0786"
	I0916 04:13:47.563662    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:13:52.036845    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:13:52.037082    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:13:52.055242    4655 logs.go:276] 1 containers: [bb77a45fbb50]
	I0916 04:13:52.055361    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:13:52.069098    4655 logs.go:276] 1 containers: [a44a59d44c6e]
	I0916 04:13:52.069189    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:13:52.080338    4655 logs.go:276] 4 containers: [5798ae515cc4 c83fcdb1c777 8869c7622640 e3ad1db7ced5]
	I0916 04:13:52.080423    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:13:52.090817    4655 logs.go:276] 1 containers: [6497fc64f33e]
	I0916 04:13:52.090898    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:13:52.101533    4655 logs.go:276] 1 containers: [c0732c73e3bf]
	I0916 04:13:52.101613    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:13:52.114376    4655 logs.go:276] 1 containers: [6872f5cacb62]
	I0916 04:13:52.114454    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:13:52.124086    4655 logs.go:276] 0 containers: []
	W0916 04:13:52.124098    4655 logs.go:278] No container was found matching "kindnet"
	I0916 04:13:52.124170    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:13:52.134887    4655 logs.go:276] 1 containers: [1b9c4326b62d]
	I0916 04:13:52.134905    4655 logs.go:123] Gathering logs for kube-apiserver [bb77a45fbb50] ...
	I0916 04:13:52.134911    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb77a45fbb50"
	I0916 04:13:52.149915    4655 logs.go:123] Gathering logs for coredns [c83fcdb1c777] ...
	I0916 04:13:52.149924    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c83fcdb1c777"
	I0916 04:13:52.165530    4655 logs.go:123] Gathering logs for kube-controller-manager [6872f5cacb62] ...
	I0916 04:13:52.165543    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6872f5cacb62"
	I0916 04:13:52.183216    4655 logs.go:123] Gathering logs for kube-proxy [c0732c73e3bf] ...
	I0916 04:13:52.183225    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0732c73e3bf"
	I0916 04:13:52.195275    4655 logs.go:123] Gathering logs for storage-provisioner [1b9c4326b62d] ...
	I0916 04:13:52.195286    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b9c4326b62d"
	I0916 04:13:52.207169    4655 logs.go:123] Gathering logs for Docker ...
	I0916 04:13:52.207181    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:13:52.232796    4655 logs.go:123] Gathering logs for dmesg ...
	I0916 04:13:52.232809    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:13:52.237529    4655 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:13:52.237540    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:13:52.273296    4655 logs.go:123] Gathering logs for coredns [8869c7622640] ...
	I0916 04:13:52.273308    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8869c7622640"
	I0916 04:13:52.285654    4655 logs.go:123] Gathering logs for kube-scheduler [6497fc64f33e] ...
	I0916 04:13:52.285664    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6497fc64f33e"
	I0916 04:13:52.300331    4655 logs.go:123] Gathering logs for kubelet ...
	I0916 04:13:52.300341    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:13:52.339452    4655 logs.go:123] Gathering logs for coredns [5798ae515cc4] ...
	I0916 04:13:52.339462    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5798ae515cc4"
	I0916 04:13:52.351533    4655 logs.go:123] Gathering logs for coredns [e3ad1db7ced5] ...
	I0916 04:13:52.351543    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3ad1db7ced5"
	I0916 04:13:52.363087    4655 logs.go:123] Gathering logs for etcd [a44a59d44c6e] ...
	I0916 04:13:52.363099    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a44a59d44c6e"
	I0916 04:13:52.378079    4655 logs.go:123] Gathering logs for container status ...
	I0916 04:13:52.378089    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:13:52.565817    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:13:52.565955    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:13:52.578035    4792 logs.go:276] 2 containers: [99668d812a17 40ab2f675d22]
	I0916 04:13:52.578122    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:13:52.588740    4792 logs.go:276] 2 containers: [751f46d9a26d 1973f852f436]
	I0916 04:13:52.588818    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:13:52.599299    4792 logs.go:276] 1 containers: [ed5c57a396a3]
	I0916 04:13:52.599385    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:13:52.609965    4792 logs.go:276] 2 containers: [41d020d3dedc 609e5463648c]
	I0916 04:13:52.610039    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:13:52.620925    4792 logs.go:276] 1 containers: [0bd7f5ba91b9]
	I0916 04:13:52.621008    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:13:52.631569    4792 logs.go:276] 2 containers: [7a56c27a016a 97104cca0786]
	I0916 04:13:52.631641    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:13:52.642027    4792 logs.go:276] 0 containers: []
	W0916 04:13:52.642038    4792 logs.go:278] No container was found matching "kindnet"
	I0916 04:13:52.642110    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:13:52.656832    4792 logs.go:276] 2 containers: [03045b2f9560 ed1b9ebe2601]
	I0916 04:13:52.656851    4792 logs.go:123] Gathering logs for kube-apiserver [99668d812a17] ...
	I0916 04:13:52.656857    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99668d812a17"
	I0916 04:13:52.671412    4792 logs.go:123] Gathering logs for etcd [1973f852f436] ...
	I0916 04:13:52.671421    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1973f852f436"
	I0916 04:13:52.685877    4792 logs.go:123] Gathering logs for kube-scheduler [41d020d3dedc] ...
	I0916 04:13:52.685887    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41d020d3dedc"
	I0916 04:13:52.697138    4792 logs.go:123] Gathering logs for kube-scheduler [609e5463648c] ...
	I0916 04:13:52.697149    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 609e5463648c"
	I0916 04:13:52.718319    4792 logs.go:123] Gathering logs for container status ...
	I0916 04:13:52.718330    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:13:52.731776    4792 logs.go:123] Gathering logs for kubelet ...
	I0916 04:13:52.731787    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:13:52.770963    4792 logs.go:123] Gathering logs for etcd [751f46d9a26d] ...
	I0916 04:13:52.770973    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 751f46d9a26d"
	I0916 04:13:52.784283    4792 logs.go:123] Gathering logs for coredns [ed5c57a396a3] ...
	I0916 04:13:52.784297    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5c57a396a3"
	I0916 04:13:52.795680    4792 logs.go:123] Gathering logs for kube-controller-manager [97104cca0786] ...
	I0916 04:13:52.795693    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97104cca0786"
	I0916 04:13:52.808470    4792 logs.go:123] Gathering logs for storage-provisioner [ed1b9ebe2601] ...
	I0916 04:13:52.808484    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed1b9ebe2601"
	I0916 04:13:52.820692    4792 logs.go:123] Gathering logs for dmesg ...
	I0916 04:13:52.820704    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:13:52.825161    4792 logs.go:123] Gathering logs for kube-controller-manager [7a56c27a016a] ...
	I0916 04:13:52.825167    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a56c27a016a"
	I0916 04:13:52.842795    4792 logs.go:123] Gathering logs for storage-provisioner [03045b2f9560] ...
	I0916 04:13:52.842809    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03045b2f9560"
	I0916 04:13:52.854617    4792 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:13:52.854628    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:13:52.888719    4792 logs.go:123] Gathering logs for kube-apiserver [40ab2f675d22] ...
	I0916 04:13:52.888733    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40ab2f675d22"
	I0916 04:13:52.935456    4792 logs.go:123] Gathering logs for kube-proxy [0bd7f5ba91b9] ...
	I0916 04:13:52.935469    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd7f5ba91b9"
	I0916 04:13:52.947397    4792 logs.go:123] Gathering logs for Docker ...
	I0916 04:13:52.947407    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:13:54.892341    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:13:55.471553    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:13:59.894949    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:13:59.895228    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:13:59.919076    4655 logs.go:276] 1 containers: [bb77a45fbb50]
	I0916 04:13:59.919192    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:13:59.938007    4655 logs.go:276] 1 containers: [a44a59d44c6e]
	I0916 04:13:59.938093    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:13:59.950763    4655 logs.go:276] 4 containers: [5798ae515cc4 c83fcdb1c777 8869c7622640 e3ad1db7ced5]
	I0916 04:13:59.950852    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:13:59.961721    4655 logs.go:276] 1 containers: [6497fc64f33e]
	I0916 04:13:59.961794    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:13:59.971954    4655 logs.go:276] 1 containers: [c0732c73e3bf]
	I0916 04:13:59.972036    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:13:59.982594    4655 logs.go:276] 1 containers: [6872f5cacb62]
	I0916 04:13:59.982682    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:13:59.994133    4655 logs.go:276] 0 containers: []
	W0916 04:13:59.994145    4655 logs.go:278] No container was found matching "kindnet"
	I0916 04:13:59.994211    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:14:00.004672    4655 logs.go:276] 1 containers: [1b9c4326b62d]
	I0916 04:14:00.004688    4655 logs.go:123] Gathering logs for kube-apiserver [bb77a45fbb50] ...
	I0916 04:14:00.004695    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb77a45fbb50"
	I0916 04:14:00.019254    4655 logs.go:123] Gathering logs for Docker ...
	I0916 04:14:00.019265    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:14:00.044654    4655 logs.go:123] Gathering logs for coredns [5798ae515cc4] ...
	I0916 04:14:00.044665    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5798ae515cc4"
	I0916 04:14:00.056573    4655 logs.go:123] Gathering logs for coredns [8869c7622640] ...
	I0916 04:14:00.056583    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8869c7622640"
	I0916 04:14:00.068758    4655 logs.go:123] Gathering logs for coredns [e3ad1db7ced5] ...
	I0916 04:14:00.068770    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3ad1db7ced5"
	I0916 04:14:00.079906    4655 logs.go:123] Gathering logs for kube-scheduler [6497fc64f33e] ...
	I0916 04:14:00.079918    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6497fc64f33e"
	I0916 04:14:00.095298    4655 logs.go:123] Gathering logs for kube-proxy [c0732c73e3bf] ...
	I0916 04:14:00.095308    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0732c73e3bf"
	I0916 04:14:00.107402    4655 logs.go:123] Gathering logs for kubelet ...
	I0916 04:14:00.107413    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:14:00.148946    4655 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:14:00.148954    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:14:00.184027    4655 logs.go:123] Gathering logs for coredns [c83fcdb1c777] ...
	I0916 04:14:00.184040    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c83fcdb1c777"
	I0916 04:14:00.195897    4655 logs.go:123] Gathering logs for dmesg ...
	I0916 04:14:00.195907    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:14:00.200672    4655 logs.go:123] Gathering logs for etcd [a44a59d44c6e] ...
	I0916 04:14:00.200678    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a44a59d44c6e"
	I0916 04:14:00.214982    4655 logs.go:123] Gathering logs for kube-controller-manager [6872f5cacb62] ...
	I0916 04:14:00.214996    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6872f5cacb62"
	I0916 04:14:00.232899    4655 logs.go:123] Gathering logs for storage-provisioner [1b9c4326b62d] ...
	I0916 04:14:00.232910    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b9c4326b62d"
	I0916 04:14:00.244898    4655 logs.go:123] Gathering logs for container status ...
	I0916 04:14:00.244910    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:14:02.758694    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:14:00.473808    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:14:00.473936    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:14:00.488797    4792 logs.go:276] 2 containers: [99668d812a17 40ab2f675d22]
	I0916 04:14:00.488873    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:14:00.499635    4792 logs.go:276] 2 containers: [751f46d9a26d 1973f852f436]
	I0916 04:14:00.499721    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:14:00.510216    4792 logs.go:276] 1 containers: [ed5c57a396a3]
	I0916 04:14:00.510300    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:14:00.521125    4792 logs.go:276] 2 containers: [41d020d3dedc 609e5463648c]
	I0916 04:14:00.521210    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:14:00.535502    4792 logs.go:276] 1 containers: [0bd7f5ba91b9]
	I0916 04:14:00.535583    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:14:00.546090    4792 logs.go:276] 2 containers: [7a56c27a016a 97104cca0786]
	I0916 04:14:00.546169    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:14:00.556307    4792 logs.go:276] 0 containers: []
	W0916 04:14:00.556321    4792 logs.go:278] No container was found matching "kindnet"
	I0916 04:14:00.556397    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:14:00.566584    4792 logs.go:276] 2 containers: [03045b2f9560 ed1b9ebe2601]
	I0916 04:14:00.566603    4792 logs.go:123] Gathering logs for kubelet ...
	I0916 04:14:00.566609    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:14:00.604459    4792 logs.go:123] Gathering logs for kube-apiserver [40ab2f675d22] ...
	I0916 04:14:00.604471    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40ab2f675d22"
	I0916 04:14:00.646182    4792 logs.go:123] Gathering logs for coredns [ed5c57a396a3] ...
	I0916 04:14:00.646195    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5c57a396a3"
	I0916 04:14:00.657933    4792 logs.go:123] Gathering logs for kube-controller-manager [97104cca0786] ...
	I0916 04:14:00.657946    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97104cca0786"
	I0916 04:14:00.670851    4792 logs.go:123] Gathering logs for storage-provisioner [03045b2f9560] ...
	I0916 04:14:00.670868    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03045b2f9560"
	I0916 04:14:00.683157    4792 logs.go:123] Gathering logs for Docker ...
	I0916 04:14:00.683166    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:14:00.705194    4792 logs.go:123] Gathering logs for etcd [1973f852f436] ...
	I0916 04:14:00.705203    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1973f852f436"
	I0916 04:14:00.719361    4792 logs.go:123] Gathering logs for kube-scheduler [609e5463648c] ...
	I0916 04:14:00.719370    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 609e5463648c"
	I0916 04:14:00.740374    4792 logs.go:123] Gathering logs for kube-proxy [0bd7f5ba91b9] ...
	I0916 04:14:00.740383    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd7f5ba91b9"
	I0916 04:14:00.752114    4792 logs.go:123] Gathering logs for kube-controller-manager [7a56c27a016a] ...
	I0916 04:14:00.752127    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a56c27a016a"
	I0916 04:14:00.769807    4792 logs.go:123] Gathering logs for kube-scheduler [41d020d3dedc] ...
	I0916 04:14:00.769817    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41d020d3dedc"
	I0916 04:14:00.781014    4792 logs.go:123] Gathering logs for storage-provisioner [ed1b9ebe2601] ...
	I0916 04:14:00.781025    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed1b9ebe2601"
	I0916 04:14:00.792040    4792 logs.go:123] Gathering logs for container status ...
	I0916 04:14:00.792052    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:14:00.805250    4792 logs.go:123] Gathering logs for dmesg ...
	I0916 04:14:00.805266    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:14:00.809753    4792 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:14:00.809760    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:14:00.843652    4792 logs.go:123] Gathering logs for kube-apiserver [99668d812a17] ...
	I0916 04:14:00.843663    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99668d812a17"
	I0916 04:14:00.857388    4792 logs.go:123] Gathering logs for etcd [751f46d9a26d] ...
	I0916 04:14:00.857397    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 751f46d9a26d"
	I0916 04:14:03.373017    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:14:07.760581    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
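[editor's note] Both processes (PIDs 4655 and 4792) are stuck in the same retry loop: api_server.go probes https://10.0.2.15:8443/healthz, the client times out ("context deadline exceeded ... while awaiting headers"), and the full log-collection cycle above repeats. A hedged sketch of such a probe is below; the 5-second budget and the InsecureSkipVerify transport are assumptions, since minikube pins the cluster CA and drives its own retry cadence.

```go
// Minimal sketch of the healthz probe that produces the
// "Checking apiserver healthz ..." / "stopped: ..." pairs in this log.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second, // the deadline whose expiry logs "stopped:"
		Transport: &http.Transport{
			// Assumption: skip verification; the apiserver cert is issued by
			// the cluster CA inside the VM, which this sketch does not load.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	url := "https://10.0.2.15:8443/healthz"
	fmt.Printf("Checking apiserver healthz at %s ...\n", url)
	resp, err := client.Get(url)
	if err != nil {
		// With a wedged apiserver this is the path taken throughout the log.
		fmt.Printf("stopped: %s: %v\n", url, err)
		return
	}
	defer resp.Body.Close()
	fmt.Printf("healthz returned %s\n", resp.Status)
}
```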
	I0916 04:14:07.760791    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:14:07.787468    4655 logs.go:276] 1 containers: [bb77a45fbb50]
	I0916 04:14:07.787566    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:14:07.806680    4655 logs.go:276] 1 containers: [a44a59d44c6e]
	I0916 04:14:07.806765    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:14:07.816993    4655 logs.go:276] 4 containers: [5798ae515cc4 c83fcdb1c777 8869c7622640 e3ad1db7ced5]
	I0916 04:14:07.817075    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:14:07.827950    4655 logs.go:276] 1 containers: [6497fc64f33e]
	I0916 04:14:07.828032    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:14:07.839406    4655 logs.go:276] 1 containers: [c0732c73e3bf]
	I0916 04:14:07.839490    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:14:07.850397    4655 logs.go:276] 1 containers: [6872f5cacb62]
	I0916 04:14:07.850479    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:14:07.860578    4655 logs.go:276] 0 containers: []
	W0916 04:14:07.860592    4655 logs.go:278] No container was found matching "kindnet"
	I0916 04:14:07.860674    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:14:07.871404    4655 logs.go:276] 1 containers: [1b9c4326b62d]
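[editor's note] Each cycle starts by enumerating containers per component with `docker ps -a --filter=name=k8s_<component> --format={{.ID}}`; the k8s_ name prefix is the cri-dockerd/dockershim naming convention, and zero matches produce the "No container was found matching" warning seen for kindnet. A small stand-alone sketch, assuming a local Docker daemon rather than an SSH session into the VM:

```go
// Hypothetical sketch of the per-component container enumeration above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers returns the IDs of all containers (running or exited) whose
// name matches the k8s_<component> prefix used by cri-dockerd/dockershim.
func listContainers(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(strings.TrimSpace(string(out))), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager",
		"kindnet", "storage-provisioner"} {
		ids, err := listContainers(c)
		if err != nil {
			fmt.Printf("E listing %s: %v\n", c, err)
			continue
		}
		if len(ids) == 0 {
			// Mirrors the warning in the log when no kindnet container exists.
			fmt.Printf("W No container was found matching %q\n", c)
			continue
		}
		fmt.Printf("I %d containers: %v\n", len(ids), ids)
	}
}
```

The `-a` flag matters: exited containers are included, which is presumably why PID 4792 sees two IDs per component once the control plane has been restarted while PID 4655 sees one.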
	I0916 04:14:07.871422    4655 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:14:07.871428    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:14:07.907321    4655 logs.go:123] Gathering logs for etcd [a44a59d44c6e] ...
	I0916 04:14:07.907334    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a44a59d44c6e"
	I0916 04:14:07.921344    4655 logs.go:123] Gathering logs for coredns [e3ad1db7ced5] ...
	I0916 04:14:07.921354    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3ad1db7ced5"
	I0916 04:14:07.938146    4655 logs.go:123] Gathering logs for kube-proxy [c0732c73e3bf] ...
	I0916 04:14:07.938157    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0732c73e3bf"
	I0916 04:14:07.949709    4655 logs.go:123] Gathering logs for kube-apiserver [bb77a45fbb50] ...
	I0916 04:14:07.949718    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb77a45fbb50"
	I0916 04:14:07.966402    4655 logs.go:123] Gathering logs for coredns [c83fcdb1c777] ...
	I0916 04:14:07.966415    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c83fcdb1c777"
	I0916 04:14:07.977700    4655 logs.go:123] Gathering logs for kube-scheduler [6497fc64f33e] ...
	I0916 04:14:07.977713    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6497fc64f33e"
	I0916 04:14:07.992668    4655 logs.go:123] Gathering logs for kube-controller-manager [6872f5cacb62] ...
	I0916 04:14:07.992680    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6872f5cacb62"
	I0916 04:14:08.010170    4655 logs.go:123] Gathering logs for Docker ...
	I0916 04:14:08.010179    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:14:08.035219    4655 logs.go:123] Gathering logs for coredns [5798ae515cc4] ...
	I0916 04:14:08.035228    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5798ae515cc4"
	I0916 04:14:08.047364    4655 logs.go:123] Gathering logs for coredns [8869c7622640] ...
	I0916 04:14:08.047376    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8869c7622640"
	I0916 04:14:08.064658    4655 logs.go:123] Gathering logs for storage-provisioner [1b9c4326b62d] ...
	I0916 04:14:08.064667    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b9c4326b62d"
	I0916 04:14:08.076214    4655 logs.go:123] Gathering logs for kubelet ...
	I0916 04:14:08.076222    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:14:08.113442    4655 logs.go:123] Gathering logs for dmesg ...
	I0916 04:14:08.113451    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:14:08.117607    4655 logs.go:123] Gathering logs for container status ...
	I0916 04:14:08.117613    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:14:08.375167    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:14:08.375291    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:14:08.403367    4792 logs.go:276] 2 containers: [99668d812a17 40ab2f675d22]
	I0916 04:14:08.403456    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:14:08.415773    4792 logs.go:276] 2 containers: [751f46d9a26d 1973f852f436]
	I0916 04:14:08.415863    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:14:08.427900    4792 logs.go:276] 1 containers: [ed5c57a396a3]
	I0916 04:14:08.427980    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:14:08.438875    4792 logs.go:276] 2 containers: [41d020d3dedc 609e5463648c]
	I0916 04:14:08.438957    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:14:08.450058    4792 logs.go:276] 1 containers: [0bd7f5ba91b9]
	I0916 04:14:08.450141    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:14:08.460493    4792 logs.go:276] 2 containers: [7a56c27a016a 97104cca0786]
	I0916 04:14:08.460578    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:14:08.470824    4792 logs.go:276] 0 containers: []
	W0916 04:14:08.470835    4792 logs.go:278] No container was found matching "kindnet"
	I0916 04:14:08.470908    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:14:08.481080    4792 logs.go:276] 2 containers: [03045b2f9560 ed1b9ebe2601]
	I0916 04:14:08.481099    4792 logs.go:123] Gathering logs for kube-apiserver [40ab2f675d22] ...
	I0916 04:14:08.481105    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40ab2f675d22"
	I0916 04:14:08.518233    4792 logs.go:123] Gathering logs for storage-provisioner [03045b2f9560] ...
	I0916 04:14:08.518243    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03045b2f9560"
	I0916 04:14:08.529422    4792 logs.go:123] Gathering logs for container status ...
	I0916 04:14:08.529433    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:14:08.543343    4792 logs.go:123] Gathering logs for kubelet ...
	I0916 04:14:08.543355    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:14:08.581578    4792 logs.go:123] Gathering logs for etcd [751f46d9a26d] ...
	I0916 04:14:08.581591    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 751f46d9a26d"
	I0916 04:14:08.595331    4792 logs.go:123] Gathering logs for storage-provisioner [ed1b9ebe2601] ...
	I0916 04:14:08.595342    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed1b9ebe2601"
	I0916 04:14:08.606714    4792 logs.go:123] Gathering logs for Docker ...
	I0916 04:14:08.606724    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:14:08.629504    4792 logs.go:123] Gathering logs for dmesg ...
	I0916 04:14:08.629514    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:14:08.633995    4792 logs.go:123] Gathering logs for kube-apiserver [99668d812a17] ...
	I0916 04:14:08.634003    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99668d812a17"
	I0916 04:14:08.648325    4792 logs.go:123] Gathering logs for etcd [1973f852f436] ...
	I0916 04:14:08.648336    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1973f852f436"
	I0916 04:14:08.663162    4792 logs.go:123] Gathering logs for coredns [ed5c57a396a3] ...
	I0916 04:14:08.663175    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5c57a396a3"
	I0916 04:14:08.674299    4792 logs.go:123] Gathering logs for kube-proxy [0bd7f5ba91b9] ...
	I0916 04:14:08.674310    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd7f5ba91b9"
	I0916 04:14:08.685787    4792 logs.go:123] Gathering logs for kube-controller-manager [7a56c27a016a] ...
	I0916 04:14:08.685797    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a56c27a016a"
	I0916 04:14:08.703720    4792 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:14:08.703730    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:14:08.737850    4792 logs.go:123] Gathering logs for kube-scheduler [41d020d3dedc] ...
	I0916 04:14:08.737865    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41d020d3dedc"
	I0916 04:14:08.750149    4792 logs.go:123] Gathering logs for kube-scheduler [609e5463648c] ...
	I0916 04:14:08.750162    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 609e5463648c"
	I0916 04:14:08.770780    4792 logs.go:123] Gathering logs for kube-controller-manager [97104cca0786] ...
	I0916 04:14:08.770791    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97104cca0786"
	I0916 04:14:10.630656    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:14:11.285443    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:14:15.631435    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:14:15.631727    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:14:15.658624    4655 logs.go:276] 1 containers: [bb77a45fbb50]
	I0916 04:14:15.658768    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:14:15.674720    4655 logs.go:276] 1 containers: [a44a59d44c6e]
	I0916 04:14:15.674820    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:14:15.687650    4655 logs.go:276] 4 containers: [5798ae515cc4 c83fcdb1c777 8869c7622640 e3ad1db7ced5]
	I0916 04:14:15.687743    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:14:15.698638    4655 logs.go:276] 1 containers: [6497fc64f33e]
	I0916 04:14:15.698713    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:14:15.708948    4655 logs.go:276] 1 containers: [c0732c73e3bf]
	I0916 04:14:15.709031    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:14:15.719814    4655 logs.go:276] 1 containers: [6872f5cacb62]
	I0916 04:14:15.719896    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:14:15.730718    4655 logs.go:276] 0 containers: []
	W0916 04:14:15.730731    4655 logs.go:278] No container was found matching "kindnet"
	I0916 04:14:15.730805    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:14:15.741525    4655 logs.go:276] 1 containers: [1b9c4326b62d]
	I0916 04:14:15.741542    4655 logs.go:123] Gathering logs for dmesg ...
	I0916 04:14:15.741548    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:14:15.746131    4655 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:14:15.746138    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:14:15.779640    4655 logs.go:123] Gathering logs for coredns [5798ae515cc4] ...
	I0916 04:14:15.779656    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5798ae515cc4"
	I0916 04:14:15.791318    4655 logs.go:123] Gathering logs for coredns [c83fcdb1c777] ...
	I0916 04:14:15.791329    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c83fcdb1c777"
	I0916 04:14:15.802611    4655 logs.go:123] Gathering logs for kube-apiserver [bb77a45fbb50] ...
	I0916 04:14:15.802620    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb77a45fbb50"
	I0916 04:14:15.822251    4655 logs.go:123] Gathering logs for coredns [8869c7622640] ...
	I0916 04:14:15.822264    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8869c7622640"
	I0916 04:14:15.835593    4655 logs.go:123] Gathering logs for kube-scheduler [6497fc64f33e] ...
	I0916 04:14:15.835606    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6497fc64f33e"
	I0916 04:14:15.854086    4655 logs.go:123] Gathering logs for kube-proxy [c0732c73e3bf] ...
	I0916 04:14:15.854097    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0732c73e3bf"
	I0916 04:14:15.866405    4655 logs.go:123] Gathering logs for coredns [e3ad1db7ced5] ...
	I0916 04:14:15.866418    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3ad1db7ced5"
	I0916 04:14:15.877642    4655 logs.go:123] Gathering logs for storage-provisioner [1b9c4326b62d] ...
	I0916 04:14:15.877652    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b9c4326b62d"
	I0916 04:14:15.888853    4655 logs.go:123] Gathering logs for kubelet ...
	I0916 04:14:15.888865    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:14:15.925795    4655 logs.go:123] Gathering logs for etcd [a44a59d44c6e] ...
	I0916 04:14:15.925804    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a44a59d44c6e"
	I0916 04:14:15.939608    4655 logs.go:123] Gathering logs for kube-controller-manager [6872f5cacb62] ...
	I0916 04:14:15.939620    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6872f5cacb62"
	I0916 04:14:15.957264    4655 logs.go:123] Gathering logs for Docker ...
	I0916 04:14:15.957274    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:14:15.981857    4655 logs.go:123] Gathering logs for container status ...
	I0916 04:14:15.981865    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:14:16.287841    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:14:16.288037    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:14:16.307410    4792 logs.go:276] 2 containers: [99668d812a17 40ab2f675d22]
	I0916 04:14:16.307501    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:14:16.319507    4792 logs.go:276] 2 containers: [751f46d9a26d 1973f852f436]
	I0916 04:14:16.319600    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:14:16.330135    4792 logs.go:276] 1 containers: [ed5c57a396a3]
	I0916 04:14:16.330213    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:14:16.340756    4792 logs.go:276] 2 containers: [41d020d3dedc 609e5463648c]
	I0916 04:14:16.340841    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:14:16.351091    4792 logs.go:276] 1 containers: [0bd7f5ba91b9]
	I0916 04:14:16.351166    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:14:16.361968    4792 logs.go:276] 2 containers: [7a56c27a016a 97104cca0786]
	I0916 04:14:16.362045    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:14:16.372850    4792 logs.go:276] 0 containers: []
	W0916 04:14:16.372864    4792 logs.go:278] No container was found matching "kindnet"
	I0916 04:14:16.372925    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:14:16.388444    4792 logs.go:276] 2 containers: [03045b2f9560 ed1b9ebe2601]
	I0916 04:14:16.388464    4792 logs.go:123] Gathering logs for kube-apiserver [40ab2f675d22] ...
	I0916 04:14:16.388470    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40ab2f675d22"
	I0916 04:14:16.426545    4792 logs.go:123] Gathering logs for etcd [751f46d9a26d] ...
	I0916 04:14:16.426556    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 751f46d9a26d"
	I0916 04:14:16.447624    4792 logs.go:123] Gathering logs for storage-provisioner [03045b2f9560] ...
	I0916 04:14:16.447634    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03045b2f9560"
	I0916 04:14:16.460065    4792 logs.go:123] Gathering logs for Docker ...
	I0916 04:14:16.460074    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:14:16.483408    4792 logs.go:123] Gathering logs for coredns [ed5c57a396a3] ...
	I0916 04:14:16.483418    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5c57a396a3"
	I0916 04:14:16.495039    4792 logs.go:123] Gathering logs for kube-controller-manager [97104cca0786] ...
	I0916 04:14:16.495050    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97104cca0786"
	I0916 04:14:16.508188    4792 logs.go:123] Gathering logs for storage-provisioner [ed1b9ebe2601] ...
	I0916 04:14:16.508202    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed1b9ebe2601"
	I0916 04:14:16.519310    4792 logs.go:123] Gathering logs for kubelet ...
	I0916 04:14:16.519322    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:14:16.557142    4792 logs.go:123] Gathering logs for dmesg ...
	I0916 04:14:16.557150    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:14:16.561200    4792 logs.go:123] Gathering logs for kube-apiserver [99668d812a17] ...
	I0916 04:14:16.561210    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99668d812a17"
	I0916 04:14:16.575257    4792 logs.go:123] Gathering logs for kube-controller-manager [7a56c27a016a] ...
	I0916 04:14:16.575266    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a56c27a016a"
	I0916 04:14:16.592399    4792 logs.go:123] Gathering logs for kube-proxy [0bd7f5ba91b9] ...
	I0916 04:14:16.592410    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd7f5ba91b9"
	I0916 04:14:16.604077    4792 logs.go:123] Gathering logs for container status ...
	I0916 04:14:16.604088    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:14:16.617121    4792 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:14:16.617131    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:14:16.652505    4792 logs.go:123] Gathering logs for etcd [1973f852f436] ...
	I0916 04:14:16.652515    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1973f852f436"
	I0916 04:14:16.666794    4792 logs.go:123] Gathering logs for kube-scheduler [41d020d3dedc] ...
	I0916 04:14:16.666807    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41d020d3dedc"
	I0916 04:14:16.678996    4792 logs.go:123] Gathering logs for kube-scheduler [609e5463648c] ...
	I0916 04:14:16.679006    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 609e5463648c"
	I0916 04:14:19.201844    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:14:18.494987    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:14:24.204557    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:14:24.204734    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:14:24.220983    4792 logs.go:276] 2 containers: [99668d812a17 40ab2f675d22]
	I0916 04:14:24.221091    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:14:24.234155    4792 logs.go:276] 2 containers: [751f46d9a26d 1973f852f436]
	I0916 04:14:24.234242    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:14:24.245259    4792 logs.go:276] 1 containers: [ed5c57a396a3]
	I0916 04:14:24.245339    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:14:24.259616    4792 logs.go:276] 2 containers: [41d020d3dedc 609e5463648c]
	I0916 04:14:24.259702    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:14:24.269996    4792 logs.go:276] 1 containers: [0bd7f5ba91b9]
	I0916 04:14:24.270083    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:14:24.281137    4792 logs.go:276] 2 containers: [7a56c27a016a 97104cca0786]
	I0916 04:14:24.281221    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:14:24.291613    4792 logs.go:276] 0 containers: []
	W0916 04:14:24.291628    4792 logs.go:278] No container was found matching "kindnet"
	I0916 04:14:24.291706    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:14:24.302279    4792 logs.go:276] 2 containers: [03045b2f9560 ed1b9ebe2601]
	I0916 04:14:24.302302    4792 logs.go:123] Gathering logs for kube-proxy [0bd7f5ba91b9] ...
	I0916 04:14:24.302309    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd7f5ba91b9"
	I0916 04:14:24.313913    4792 logs.go:123] Gathering logs for kube-controller-manager [7a56c27a016a] ...
	I0916 04:14:24.313928    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a56c27a016a"
	I0916 04:14:24.331373    4792 logs.go:123] Gathering logs for storage-provisioner [03045b2f9560] ...
	I0916 04:14:24.331387    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03045b2f9560"
	I0916 04:14:24.342679    4792 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:14:24.342689    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:14:24.379238    4792 logs.go:123] Gathering logs for kube-apiserver [99668d812a17] ...
	I0916 04:14:24.379250    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99668d812a17"
	I0916 04:14:24.393372    4792 logs.go:123] Gathering logs for kube-scheduler [41d020d3dedc] ...
	I0916 04:14:24.393386    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41d020d3dedc"
	I0916 04:14:24.405423    4792 logs.go:123] Gathering logs for storage-provisioner [ed1b9ebe2601] ...
	I0916 04:14:24.405434    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed1b9ebe2601"
	I0916 04:14:24.419757    4792 logs.go:123] Gathering logs for Docker ...
	I0916 04:14:24.419769    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:14:24.443334    4792 logs.go:123] Gathering logs for kubelet ...
	I0916 04:14:24.443346    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:14:24.479156    4792 logs.go:123] Gathering logs for kube-apiserver [40ab2f675d22] ...
	I0916 04:14:24.479167    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40ab2f675d22"
	I0916 04:14:24.517943    4792 logs.go:123] Gathering logs for etcd [1973f852f436] ...
	I0916 04:14:24.517964    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1973f852f436"
	I0916 04:14:24.532898    4792 logs.go:123] Gathering logs for kube-scheduler [609e5463648c] ...
	I0916 04:14:24.532908    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 609e5463648c"
	I0916 04:14:24.554083    4792 logs.go:123] Gathering logs for kube-controller-manager [97104cca0786] ...
	I0916 04:14:24.554095    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97104cca0786"
	I0916 04:14:24.567645    4792 logs.go:123] Gathering logs for container status ...
	I0916 04:14:24.567656    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:14:24.580126    4792 logs.go:123] Gathering logs for dmesg ...
	I0916 04:14:24.580138    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:14:24.585139    4792 logs.go:123] Gathering logs for etcd [751f46d9a26d] ...
	I0916 04:14:24.585145    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 751f46d9a26d"
	I0916 04:14:24.599887    4792 logs.go:123] Gathering logs for coredns [ed5c57a396a3] ...
	I0916 04:14:24.599901    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5c57a396a3"
	I0916 04:14:23.495557    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:14:23.495794    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:14:23.517652    4655 logs.go:276] 1 containers: [bb77a45fbb50]
	I0916 04:14:23.517762    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:14:23.542139    4655 logs.go:276] 1 containers: [a44a59d44c6e]
	I0916 04:14:23.542227    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:14:23.554202    4655 logs.go:276] 4 containers: [5798ae515cc4 c83fcdb1c777 8869c7622640 e3ad1db7ced5]
	I0916 04:14:23.554287    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:14:23.565166    4655 logs.go:276] 1 containers: [6497fc64f33e]
	I0916 04:14:23.565248    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:14:23.575334    4655 logs.go:276] 1 containers: [c0732c73e3bf]
	I0916 04:14:23.575407    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:14:23.585852    4655 logs.go:276] 1 containers: [6872f5cacb62]
	I0916 04:14:23.585923    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:14:23.595972    4655 logs.go:276] 0 containers: []
	W0916 04:14:23.595986    4655 logs.go:278] No container was found matching "kindnet"
	I0916 04:14:23.596058    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:14:23.606271    4655 logs.go:276] 1 containers: [1b9c4326b62d]
	I0916 04:14:23.606288    4655 logs.go:123] Gathering logs for kubelet ...
	I0916 04:14:23.606292    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:14:23.642978    4655 logs.go:123] Gathering logs for kube-proxy [c0732c73e3bf] ...
	I0916 04:14:23.642990    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0732c73e3bf"
	I0916 04:14:23.656564    4655 logs.go:123] Gathering logs for Docker ...
	I0916 04:14:23.656575    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:14:23.681395    4655 logs.go:123] Gathering logs for kube-controller-manager [6872f5cacb62] ...
	I0916 04:14:23.681402    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6872f5cacb62"
	I0916 04:14:23.698480    4655 logs.go:123] Gathering logs for storage-provisioner [1b9c4326b62d] ...
	I0916 04:14:23.698491    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b9c4326b62d"
	I0916 04:14:23.710543    4655 logs.go:123] Gathering logs for container status ...
	I0916 04:14:23.710554    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:14:23.722165    4655 logs.go:123] Gathering logs for dmesg ...
	I0916 04:14:23.722178    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:14:23.726420    4655 logs.go:123] Gathering logs for coredns [e3ad1db7ced5] ...
	I0916 04:14:23.726427    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3ad1db7ced5"
	I0916 04:14:23.747055    4655 logs.go:123] Gathering logs for kube-scheduler [6497fc64f33e] ...
	I0916 04:14:23.747068    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6497fc64f33e"
	I0916 04:14:23.762120    4655 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:14:23.762130    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:14:23.800820    4655 logs.go:123] Gathering logs for etcd [a44a59d44c6e] ...
	I0916 04:14:23.800833    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a44a59d44c6e"
	I0916 04:14:23.819678    4655 logs.go:123] Gathering logs for coredns [5798ae515cc4] ...
	I0916 04:14:23.819692    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5798ae515cc4"
	I0916 04:14:23.831891    4655 logs.go:123] Gathering logs for kube-apiserver [bb77a45fbb50] ...
	I0916 04:14:23.831901    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb77a45fbb50"
	I0916 04:14:23.846147    4655 logs.go:123] Gathering logs for coredns [c83fcdb1c777] ...
	I0916 04:14:23.846158    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c83fcdb1c777"
	I0916 04:14:23.866208    4655 logs.go:123] Gathering logs for coredns [8869c7622640] ...
	I0916 04:14:23.866219    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8869c7622640"
	I0916 04:14:26.380248    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:14:27.113556    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:14:31.381211    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:14:31.381457    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:14:31.400937    4655 logs.go:276] 1 containers: [bb77a45fbb50]
	I0916 04:14:31.401054    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:14:31.415247    4655 logs.go:276] 1 containers: [a44a59d44c6e]
	I0916 04:14:31.415346    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:14:31.427700    4655 logs.go:276] 4 containers: [5798ae515cc4 c83fcdb1c777 8869c7622640 e3ad1db7ced5]
	I0916 04:14:31.427789    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:14:31.446227    4655 logs.go:276] 1 containers: [6497fc64f33e]
	I0916 04:14:31.446309    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:14:31.456934    4655 logs.go:276] 1 containers: [c0732c73e3bf]
	I0916 04:14:31.457008    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:14:31.467054    4655 logs.go:276] 1 containers: [6872f5cacb62]
	I0916 04:14:31.467123    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:14:31.477749    4655 logs.go:276] 0 containers: []
	W0916 04:14:31.477761    4655 logs.go:278] No container was found matching "kindnet"
	I0916 04:14:31.477834    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:14:31.497740    4655 logs.go:276] 1 containers: [1b9c4326b62d]
	I0916 04:14:31.497758    4655 logs.go:123] Gathering logs for kube-apiserver [bb77a45fbb50] ...
	I0916 04:14:31.497764    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb77a45fbb50"
	I0916 04:14:31.511419    4655 logs.go:123] Gathering logs for coredns [5798ae515cc4] ...
	I0916 04:14:31.511432    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5798ae515cc4"
	I0916 04:14:31.522749    4655 logs.go:123] Gathering logs for container status ...
	I0916 04:14:31.522760    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:14:31.534885    4655 logs.go:123] Gathering logs for dmesg ...
	I0916 04:14:31.534897    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:14:31.539246    4655 logs.go:123] Gathering logs for etcd [a44a59d44c6e] ...
	I0916 04:14:31.539253    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a44a59d44c6e"
	I0916 04:14:31.553549    4655 logs.go:123] Gathering logs for coredns [c83fcdb1c777] ...
	I0916 04:14:31.553558    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c83fcdb1c777"
	I0916 04:14:31.565657    4655 logs.go:123] Gathering logs for coredns [8869c7622640] ...
	I0916 04:14:31.565667    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8869c7622640"
	I0916 04:14:31.577061    4655 logs.go:123] Gathering logs for kube-scheduler [6497fc64f33e] ...
	I0916 04:14:31.577071    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6497fc64f33e"
	I0916 04:14:31.591643    4655 logs.go:123] Gathering logs for kubelet ...
	I0916 04:14:31.591653    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:14:31.628841    4655 logs.go:123] Gathering logs for kube-proxy [c0732c73e3bf] ...
	I0916 04:14:31.628850    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0732c73e3bf"
	I0916 04:14:31.640610    4655 logs.go:123] Gathering logs for kube-controller-manager [6872f5cacb62] ...
	I0916 04:14:31.640621    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6872f5cacb62"
	I0916 04:14:31.658145    4655 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:14:31.658158    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:14:31.693039    4655 logs.go:123] Gathering logs for coredns [e3ad1db7ced5] ...
	I0916 04:14:31.693049    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3ad1db7ced5"
	I0916 04:14:31.708853    4655 logs.go:123] Gathering logs for storage-provisioner [1b9c4326b62d] ...
	I0916 04:14:31.708864    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b9c4326b62d"
	I0916 04:14:31.720709    4655 logs.go:123] Gathering logs for Docker ...
	I0916 04:14:31.720722    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:14:32.115841    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:14:32.116026    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:14:32.135598    4792 logs.go:276] 2 containers: [99668d812a17 40ab2f675d22]
	I0916 04:14:32.135714    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:14:32.149919    4792 logs.go:276] 2 containers: [751f46d9a26d 1973f852f436]
	I0916 04:14:32.149996    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:14:32.162140    4792 logs.go:276] 1 containers: [ed5c57a396a3]
	I0916 04:14:32.162209    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:14:32.172789    4792 logs.go:276] 2 containers: [41d020d3dedc 609e5463648c]
	I0916 04:14:32.172872    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:14:32.191147    4792 logs.go:276] 1 containers: [0bd7f5ba91b9]
	I0916 04:14:32.191230    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:14:32.201730    4792 logs.go:276] 2 containers: [7a56c27a016a 97104cca0786]
	I0916 04:14:32.201821    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:14:32.212322    4792 logs.go:276] 0 containers: []
	W0916 04:14:32.212334    4792 logs.go:278] No container was found matching "kindnet"
	I0916 04:14:32.212405    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:14:32.222864    4792 logs.go:276] 2 containers: [03045b2f9560 ed1b9ebe2601]
	I0916 04:14:32.222881    4792 logs.go:123] Gathering logs for kube-controller-manager [97104cca0786] ...
	I0916 04:14:32.222887    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97104cca0786"
	I0916 04:14:32.235823    4792 logs.go:123] Gathering logs for Docker ...
	I0916 04:14:32.235836    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:14:32.259088    4792 logs.go:123] Gathering logs for kube-controller-manager [7a56c27a016a] ...
	I0916 04:14:32.259100    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a56c27a016a"
	I0916 04:14:32.276862    4792 logs.go:123] Gathering logs for kube-apiserver [40ab2f675d22] ...
	I0916 04:14:32.276876    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40ab2f675d22"
	I0916 04:14:32.315628    4792 logs.go:123] Gathering logs for etcd [1973f852f436] ...
	I0916 04:14:32.315639    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1973f852f436"
	I0916 04:14:32.333829    4792 logs.go:123] Gathering logs for coredns [ed5c57a396a3] ...
	I0916 04:14:32.333843    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5c57a396a3"
	I0916 04:14:32.345700    4792 logs.go:123] Gathering logs for kube-proxy [0bd7f5ba91b9] ...
	I0916 04:14:32.345712    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd7f5ba91b9"
	I0916 04:14:32.357385    4792 logs.go:123] Gathering logs for kube-apiserver [99668d812a17] ...
	I0916 04:14:32.357397    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99668d812a17"
	I0916 04:14:32.371475    4792 logs.go:123] Gathering logs for dmesg ...
	I0916 04:14:32.371484    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:14:32.375727    4792 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:14:32.375734    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:14:32.409157    4792 logs.go:123] Gathering logs for etcd [751f46d9a26d] ...
	I0916 04:14:32.409168    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 751f46d9a26d"
	I0916 04:14:32.423284    4792 logs.go:123] Gathering logs for kube-scheduler [609e5463648c] ...
	I0916 04:14:32.423294    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 609e5463648c"
	I0916 04:14:32.447979    4792 logs.go:123] Gathering logs for storage-provisioner [ed1b9ebe2601] ...
	I0916 04:14:32.447992    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed1b9ebe2601"
	I0916 04:14:32.461017    4792 logs.go:123] Gathering logs for kubelet ...
	I0916 04:14:32.461030    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:14:32.500612    4792 logs.go:123] Gathering logs for storage-provisioner [03045b2f9560] ...
	I0916 04:14:32.500631    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03045b2f9560"
	I0916 04:14:32.532367    4792 logs.go:123] Gathering logs for container status ...
	I0916 04:14:32.532380    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:14:32.549860    4792 logs.go:123] Gathering logs for kube-scheduler [41d020d3dedc] ...
	I0916 04:14:32.549876    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41d020d3dedc"
	I0916 04:14:34.246053    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:14:35.062159    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:14:39.248358    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:14:39.248840    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:14:39.281113    4655 logs.go:276] 1 containers: [bb77a45fbb50]
	I0916 04:14:39.281266    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:14:39.301075    4655 logs.go:276] 1 containers: [a44a59d44c6e]
	I0916 04:14:39.301188    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:14:39.315371    4655 logs.go:276] 4 containers: [5798ae515cc4 c83fcdb1c777 8869c7622640 e3ad1db7ced5]
	I0916 04:14:39.315467    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:14:39.327380    4655 logs.go:276] 1 containers: [6497fc64f33e]
	I0916 04:14:39.327467    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:14:39.338288    4655 logs.go:276] 1 containers: [c0732c73e3bf]
	I0916 04:14:39.338371    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:14:39.349834    4655 logs.go:276] 1 containers: [6872f5cacb62]
	I0916 04:14:39.349911    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:14:39.359852    4655 logs.go:276] 0 containers: []
	W0916 04:14:39.359864    4655 logs.go:278] No container was found matching "kindnet"
	I0916 04:14:39.359935    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:14:39.370644    4655 logs.go:276] 1 containers: [1b9c4326b62d]
	I0916 04:14:39.370661    4655 logs.go:123] Gathering logs for kube-scheduler [6497fc64f33e] ...
	I0916 04:14:39.370666    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6497fc64f33e"
	I0916 04:14:39.385983    4655 logs.go:123] Gathering logs for kube-proxy [c0732c73e3bf] ...
	I0916 04:14:39.385993    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0732c73e3bf"
	I0916 04:14:39.398010    4655 logs.go:123] Gathering logs for kubelet ...
	I0916 04:14:39.398021    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:14:39.442034    4655 logs.go:123] Gathering logs for etcd [a44a59d44c6e] ...
	I0916 04:14:39.442046    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a44a59d44c6e"
	I0916 04:14:39.458127    4655 logs.go:123] Gathering logs for coredns [5798ae515cc4] ...
	I0916 04:14:39.458145    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5798ae515cc4"
	I0916 04:14:39.469697    4655 logs.go:123] Gathering logs for coredns [e3ad1db7ced5] ...
	I0916 04:14:39.469707    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3ad1db7ced5"
	I0916 04:14:39.481355    4655 logs.go:123] Gathering logs for kube-controller-manager [6872f5cacb62] ...
	I0916 04:14:39.481366    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6872f5cacb62"
	I0916 04:14:39.498353    4655 logs.go:123] Gathering logs for container status ...
	I0916 04:14:39.498364    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:14:39.510135    4655 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:14:39.510146    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:14:39.549625    4655 logs.go:123] Gathering logs for coredns [c83fcdb1c777] ...
	I0916 04:14:39.549640    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c83fcdb1c777"
	I0916 04:14:39.561608    4655 logs.go:123] Gathering logs for Docker ...
	I0916 04:14:39.561617    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:14:39.586092    4655 logs.go:123] Gathering logs for dmesg ...
	I0916 04:14:39.586105    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:14:39.590971    4655 logs.go:123] Gathering logs for coredns [8869c7622640] ...
	I0916 04:14:39.590978    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8869c7622640"
	I0916 04:14:39.602364    4655 logs.go:123] Gathering logs for storage-provisioner [1b9c4326b62d] ...
	I0916 04:14:39.602374    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b9c4326b62d"
	I0916 04:14:39.614439    4655 logs.go:123] Gathering logs for kube-apiserver [bb77a45fbb50] ...
	I0916 04:14:39.614451    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb77a45fbb50"
	I0916 04:14:42.130898    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:14:40.064327    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:14:40.064495    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:14:40.076658    4792 logs.go:276] 2 containers: [99668d812a17 40ab2f675d22]
	I0916 04:14:40.076738    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:14:40.087751    4792 logs.go:276] 2 containers: [751f46d9a26d 1973f852f436]
	I0916 04:14:40.087835    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:14:40.099925    4792 logs.go:276] 1 containers: [ed5c57a396a3]
	I0916 04:14:40.100017    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:14:40.110447    4792 logs.go:276] 2 containers: [41d020d3dedc 609e5463648c]
	I0916 04:14:40.110535    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:14:40.121116    4792 logs.go:276] 1 containers: [0bd7f5ba91b9]
	I0916 04:14:40.121199    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:14:40.131738    4792 logs.go:276] 2 containers: [7a56c27a016a 97104cca0786]
	I0916 04:14:40.131818    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:14:40.146568    4792 logs.go:276] 0 containers: []
	W0916 04:14:40.146580    4792 logs.go:278] No container was found matching "kindnet"
	I0916 04:14:40.146653    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:14:40.157332    4792 logs.go:276] 2 containers: [03045b2f9560 ed1b9ebe2601]
	I0916 04:14:40.157350    4792 logs.go:123] Gathering logs for kube-controller-manager [97104cca0786] ...
	I0916 04:14:40.157356    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97104cca0786"
	I0916 04:14:40.170843    4792 logs.go:123] Gathering logs for storage-provisioner [03045b2f9560] ...
	I0916 04:14:40.170853    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03045b2f9560"
	I0916 04:14:40.182590    4792 logs.go:123] Gathering logs for Docker ...
	I0916 04:14:40.182600    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:14:40.205139    4792 logs.go:123] Gathering logs for etcd [751f46d9a26d] ...
	I0916 04:14:40.205146    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 751f46d9a26d"
	I0916 04:14:40.218936    4792 logs.go:123] Gathering logs for coredns [ed5c57a396a3] ...
	I0916 04:14:40.218945    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5c57a396a3"
	I0916 04:14:40.230870    4792 logs.go:123] Gathering logs for kube-proxy [0bd7f5ba91b9] ...
	I0916 04:14:40.230880    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd7f5ba91b9"
	I0916 04:14:40.242492    4792 logs.go:123] Gathering logs for dmesg ...
	I0916 04:14:40.242509    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:14:40.246587    4792 logs.go:123] Gathering logs for kube-scheduler [41d020d3dedc] ...
	I0916 04:14:40.246597    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41d020d3dedc"
	I0916 04:14:40.259165    4792 logs.go:123] Gathering logs for container status ...
	I0916 04:14:40.259177    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:14:40.270960    4792 logs.go:123] Gathering logs for kubelet ...
	I0916 04:14:40.270973    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:14:40.309374    4792 logs.go:123] Gathering logs for etcd [1973f852f436] ...
	I0916 04:14:40.309389    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1973f852f436"
	I0916 04:14:40.325762    4792 logs.go:123] Gathering logs for kube-scheduler [609e5463648c] ...
	I0916 04:14:40.325773    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 609e5463648c"
	I0916 04:14:40.347933    4792 logs.go:123] Gathering logs for kube-controller-manager [7a56c27a016a] ...
	I0916 04:14:40.347944    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a56c27a016a"
	I0916 04:14:40.372379    4792 logs.go:123] Gathering logs for storage-provisioner [ed1b9ebe2601] ...
	I0916 04:14:40.372390    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed1b9ebe2601"
	I0916 04:14:40.383413    4792 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:14:40.383426    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:14:40.419426    4792 logs.go:123] Gathering logs for kube-apiserver [99668d812a17] ...
	I0916 04:14:40.419437    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99668d812a17"
	I0916 04:14:40.433205    4792 logs.go:123] Gathering logs for kube-apiserver [40ab2f675d22] ...
	I0916 04:14:40.433220    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40ab2f675d22"
	I0916 04:14:42.971446    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:14:47.133100    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:14:47.133288    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:14:47.150554    4655 logs.go:276] 1 containers: [bb77a45fbb50]
	I0916 04:14:47.150648    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:14:47.162736    4655 logs.go:276] 1 containers: [a44a59d44c6e]
	I0916 04:14:47.162815    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:14:47.173322    4655 logs.go:276] 4 containers: [5798ae515cc4 c83fcdb1c777 8869c7622640 e3ad1db7ced5]
	I0916 04:14:47.173406    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:14:47.183999    4655 logs.go:276] 1 containers: [6497fc64f33e]
	I0916 04:14:47.184083    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:14:47.206293    4655 logs.go:276] 1 containers: [c0732c73e3bf]
	I0916 04:14:47.206377    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:14:47.217090    4655 logs.go:276] 1 containers: [6872f5cacb62]
	I0916 04:14:47.217167    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:14:47.227174    4655 logs.go:276] 0 containers: []
	W0916 04:14:47.227185    4655 logs.go:278] No container was found matching "kindnet"
	I0916 04:14:47.227256    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:14:47.238138    4655 logs.go:276] 1 containers: [1b9c4326b62d]
	I0916 04:14:47.238155    4655 logs.go:123] Gathering logs for kubelet ...
	I0916 04:14:47.238161    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:14:47.275326    4655 logs.go:123] Gathering logs for dmesg ...
	I0916 04:14:47.275336    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:14:47.280400    4655 logs.go:123] Gathering logs for kube-scheduler [6497fc64f33e] ...
	I0916 04:14:47.280408    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6497fc64f33e"
	I0916 04:14:47.295332    4655 logs.go:123] Gathering logs for kube-controller-manager [6872f5cacb62] ...
	I0916 04:14:47.295345    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6872f5cacb62"
	I0916 04:14:47.312846    4655 logs.go:123] Gathering logs for Docker ...
	I0916 04:14:47.312857    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:14:47.337775    4655 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:14:47.337783    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:14:47.372659    4655 logs.go:123] Gathering logs for coredns [5798ae515cc4] ...
	I0916 04:14:47.372671    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5798ae515cc4"
	I0916 04:14:47.384513    4655 logs.go:123] Gathering logs for coredns [8869c7622640] ...
	I0916 04:14:47.384525    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8869c7622640"
	I0916 04:14:47.396160    4655 logs.go:123] Gathering logs for kube-proxy [c0732c73e3bf] ...
	I0916 04:14:47.396171    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0732c73e3bf"
	I0916 04:14:47.412178    4655 logs.go:123] Gathering logs for coredns [c83fcdb1c777] ...
	I0916 04:14:47.412190    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c83fcdb1c777"
	I0916 04:14:47.426493    4655 logs.go:123] Gathering logs for coredns [e3ad1db7ced5] ...
	I0916 04:14:47.426503    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3ad1db7ced5"
	I0916 04:14:47.440935    4655 logs.go:123] Gathering logs for kube-apiserver [bb77a45fbb50] ...
	I0916 04:14:47.440949    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb77a45fbb50"
	I0916 04:14:47.455763    4655 logs.go:123] Gathering logs for etcd [a44a59d44c6e] ...
	I0916 04:14:47.455776    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a44a59d44c6e"
	I0916 04:14:47.471325    4655 logs.go:123] Gathering logs for storage-provisioner [1b9c4326b62d] ...
	I0916 04:14:47.471335    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b9c4326b62d"
	I0916 04:14:47.483711    4655 logs.go:123] Gathering logs for container status ...
	I0916 04:14:47.483722    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:14:47.973993    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:14:47.974075    4792 kubeadm.go:597] duration metric: took 4m3.9805465s to restartPrimaryControlPlane
	W0916 04:14:47.974157    4792 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0916 04:14:47.974192    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0916 04:14:49.012088    4792 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.037904542s)
	I0916 04:14:49.012179    4792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 04:14:49.017144    4792 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 04:14:49.020293    4792 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 04:14:49.023121    4792 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 04:14:49.023127    4792 kubeadm.go:157] found existing configuration files:
	
	I0916 04:14:49.023151    4792 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50516 /etc/kubernetes/admin.conf
	I0916 04:14:49.025569    4792 kubeadm.go:163] "https://control-plane.minikube.internal:50516" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50516 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 04:14:49.025605    4792 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 04:14:49.028846    4792 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50516 /etc/kubernetes/kubelet.conf
	I0916 04:14:49.032107    4792 kubeadm.go:163] "https://control-plane.minikube.internal:50516" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50516 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 04:14:49.032159    4792 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 04:14:49.035440    4792 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50516 /etc/kubernetes/controller-manager.conf
	I0916 04:14:49.038254    4792 kubeadm.go:163] "https://control-plane.minikube.internal:50516" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50516 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 04:14:49.038302    4792 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 04:14:49.041178    4792 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50516 /etc/kubernetes/scheduler.conf
	I0916 04:14:49.044581    4792 kubeadm.go:163] "https://control-plane.minikube.internal:50516" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50516 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 04:14:49.044620    4792 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
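
	[editor's note] The four grep/rm pairs above are minikube's stale-kubeconfig check: each file under /etc/kubernetes is grepped for the expected control-plane endpoint and removed if the endpoint is not found (here every grep exits 2 because the files do not exist at all). A minimal sketch of the same loop, with the endpoint and file names taken from this log:

	ENDPOINT="https://control-plane.minikube.internal:50516"
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  # drop any kubeconfig that does not reference the expected endpoint
	  sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	done
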
	I0916 04:14:49.047700    4792 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0916 04:14:49.066537    4792 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0916 04:14:49.066566    4792 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 04:14:49.116399    4792 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 04:14:49.116452    4792 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 04:14:49.116501    4792 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0916 04:14:49.166692    4792 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 04:14:49.174808    4792 out.go:235]   - Generating certificates and keys ...
	I0916 04:14:49.174843    4792 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 04:14:49.174876    4792 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 04:14:49.174923    4792 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0916 04:14:49.174956    4792 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0916 04:14:49.174991    4792 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0916 04:14:49.175020    4792 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0916 04:14:49.175058    4792 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0916 04:14:49.175097    4792 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0916 04:14:49.175136    4792 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0916 04:14:49.175176    4792 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0916 04:14:49.175197    4792 kubeadm.go:310] [certs] Using the existing "sa" key
	I0916 04:14:49.175238    4792 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 04:14:49.250395    4792 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 04:14:49.310547    4792 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 04:14:49.395080    4792 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 04:14:49.425345    4792 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 04:14:49.455496    4792 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 04:14:49.455803    4792 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 04:14:49.455859    4792 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 04:14:49.541588    4792 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 04:14:49.544729    4792 out.go:235]   - Booting up control plane ...
	I0916 04:14:49.544780    4792 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 04:14:49.544830    4792 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 04:14:49.544866    4792 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 04:14:49.544931    4792 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 04:14:49.545099    4792 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0916 04:14:49.997039    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:14:53.543834    4792 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.000972 seconds
	I0916 04:14:53.543901    4792 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 04:14:53.548672    4792 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 04:14:54.057541    4792 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 04:14:54.057687    4792 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-716000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 04:14:54.561209    4792 kubeadm.go:310] [bootstrap-token] Using token: in0fi5.n676jkcsrk0svadq
	I0916 04:14:54.565481    4792 out.go:235]   - Configuring RBAC rules ...
	I0916 04:14:54.565545    4792 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 04:14:54.565593    4792 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 04:14:54.571532    4792 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 04:14:54.572535    4792 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 04:14:54.573378    4792 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 04:14:54.574188    4792 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 04:14:54.577148    4792 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 04:14:54.751944    4792 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 04:14:54.964642    4792 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 04:14:54.965064    4792 kubeadm.go:310] 
	I0916 04:14:54.965097    4792 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 04:14:54.965102    4792 kubeadm.go:310] 
	I0916 04:14:54.965142    4792 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 04:14:54.965147    4792 kubeadm.go:310] 
	I0916 04:14:54.965163    4792 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 04:14:54.965189    4792 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 04:14:54.965214    4792 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 04:14:54.965218    4792 kubeadm.go:310] 
	I0916 04:14:54.965265    4792 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 04:14:54.965270    4792 kubeadm.go:310] 
	I0916 04:14:54.965294    4792 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 04:14:54.965297    4792 kubeadm.go:310] 
	I0916 04:14:54.965330    4792 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 04:14:54.965368    4792 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 04:14:54.965407    4792 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 04:14:54.965410    4792 kubeadm.go:310] 
	I0916 04:14:54.965452    4792 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 04:14:54.965499    4792 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 04:14:54.965502    4792 kubeadm.go:310] 
	I0916 04:14:54.965539    4792 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token in0fi5.n676jkcsrk0svadq \
	I0916 04:14:54.965592    4792 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:dd10c29c679a7875936030b69b87136fbf1dbd0c06d390de7566972b8cb65116 \
	I0916 04:14:54.965603    4792 kubeadm.go:310] 	--control-plane 
	I0916 04:14:54.965607    4792 kubeadm.go:310] 
	I0916 04:14:54.965656    4792 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 04:14:54.965669    4792 kubeadm.go:310] 
	I0916 04:14:54.965725    4792 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token in0fi5.n676jkcsrk0svadq \
	I0916 04:14:54.965776    4792 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:dd10c29c679a7875936030b69b87136fbf1dbd0c06d390de7566972b8cb65116 
	I0916 04:14:54.965942    4792 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
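
	[editor's note] The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA's public key. It can be recomputed with the standard kubeadm recipe (sketch; the ca.crt path is an assumption inferred from the "[certs] Using certificateDir folder /var/lib/minikube/certs" line):

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'
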
	I0916 04:14:54.965951    4792 cni.go:84] Creating CNI manager for ""
	I0916 04:14:54.965960    4792 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 04:14:54.968602    4792 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0916 04:14:54.975743    4792 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0916 04:14:54.978787    4792 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
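
	[editor's note] The 496-byte payload scp'd above is the bridge conflist minikube generates when it recommends the bridge CNI. Its exact bytes are not in this log; a representative bridge conflist of that shape (all values assumed; 10.244.0.0/16 is the usual default pod CIDR) would be written roughly like:

	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF
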
	I0916 04:14:54.986730    4792 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 04:14:54.986821    4792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 04:14:54.986822    4792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-716000 minikube.k8s.io/updated_at=2024_09_16T04_14_54_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=stopped-upgrade-716000 minikube.k8s.io/primary=true
	I0916 04:14:54.989954    4792 ops.go:34] apiserver oom_adj: -16
	I0916 04:14:55.036111    4792 kubeadm.go:1113] duration metric: took 49.361416ms to wait for elevateKubeSystemPrivileges
	I0916 04:14:55.036127    4792 kubeadm.go:394] duration metric: took 4m11.056594291s to StartCluster
	I0916 04:14:55.036137    4792 settings.go:142] acquiring lock: {Name:mk9072b559308de66cf3dabb49aa5dd0b6d18e61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 04:14:55.036232    4792 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19651-1133/kubeconfig
	I0916 04:14:55.036639    4792 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19651-1133/kubeconfig: {Name:mk6c71bad5b7ea3c1178ed072ac13c9e3d1e147d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 04:14:55.036854    4792 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 04:14:55.036907    4792 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 04:14:55.036943    4792 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-716000"
	I0916 04:14:55.036952    4792 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-716000"
	W0916 04:14:55.036983    4792 addons.go:243] addon storage-provisioner should already be in state true
	I0916 04:14:55.036996    4792 host.go:66] Checking if "stopped-upgrade-716000" exists ...
	I0916 04:14:55.036992    4792 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-716000"
	I0916 04:14:55.037012    4792 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-716000"
	I0916 04:14:55.036982    4792 config.go:182] Loaded profile config "stopped-upgrade-716000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0916 04:14:55.037464    4792 retry.go:31] will retry after 890.279497ms: connect: dial unix /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/stopped-upgrade-716000/monitor: connect: connection refused
	I0916 04:14:55.038408    4792 kapi.go:59] client config for stopped-upgrade-716000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/stopped-upgrade-716000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/stopped-upgrade-716000/client.key", CAFile:"/Users/jenkins/minikube-integration/19651-1133/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1066e9800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 04:14:55.038545    4792 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-716000"
	W0916 04:14:55.038551    4792 addons.go:243] addon default-storageclass should already be in state true
	I0916 04:14:55.038560    4792 host.go:66] Checking if "stopped-upgrade-716000" exists ...
	I0916 04:14:55.039127    4792 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 04:14:55.039133    4792 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 04:14:55.039140    4792 sshutil.go:53] new ssh client: &{IP:localhost Port:50481 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/stopped-upgrade-716000/id_rsa Username:docker}
	I0916 04:14:55.040760    4792 out.go:177] * Verifying Kubernetes components...
	I0916 04:14:55.048769    4792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 04:14:55.136939    4792 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 04:14:55.142647    4792 api_server.go:52] waiting for apiserver process to appear ...
	I0916 04:14:55.142720    4792 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 04:14:55.147066    4792 api_server.go:72] duration metric: took 110.200916ms to wait for apiserver process to appear ...
	I0916 04:14:55.147076    4792 api_server.go:88] waiting for apiserver healthz status ...
	I0916 04:14:55.147086    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:14:55.179217    4792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 04:14:55.496969    4792 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0916 04:14:55.496984    4792 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0916 04:14:55.934603    4792 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 04:14:54.998151    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:14:54.998262    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:14:55.011537    4655 logs.go:276] 1 containers: [bb77a45fbb50]
	I0916 04:14:55.011632    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:14:55.024232    4655 logs.go:276] 1 containers: [a44a59d44c6e]
	I0916 04:14:55.024319    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:14:55.036484    4655 logs.go:276] 4 containers: [5798ae515cc4 c83fcdb1c777 8869c7622640 e3ad1db7ced5]
	I0916 04:14:55.036551    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:14:55.048969    4655 logs.go:276] 1 containers: [6497fc64f33e]
	I0916 04:14:55.049022    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:14:55.060568    4655 logs.go:276] 1 containers: [c0732c73e3bf]
	I0916 04:14:55.060649    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:14:55.071319    4655 logs.go:276] 1 containers: [6872f5cacb62]
	I0916 04:14:55.071403    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:14:55.082957    4655 logs.go:276] 0 containers: []
	W0916 04:14:55.082971    4655 logs.go:278] No container was found matching "kindnet"
	I0916 04:14:55.083044    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:14:55.094916    4655 logs.go:276] 1 containers: [1b9c4326b62d]
	I0916 04:14:55.094934    4655 logs.go:123] Gathering logs for kubelet ...
	I0916 04:14:55.094939    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:14:55.132764    4655 logs.go:123] Gathering logs for coredns [5798ae515cc4] ...
	I0916 04:14:55.132783    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5798ae515cc4"
	I0916 04:14:55.146043    4655 logs.go:123] Gathering logs for coredns [c83fcdb1c777] ...
	I0916 04:14:55.146054    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c83fcdb1c777"
	I0916 04:14:55.158415    4655 logs.go:123] Gathering logs for container status ...
	I0916 04:14:55.158426    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:14:55.170276    4655 logs.go:123] Gathering logs for kube-controller-manager [6872f5cacb62] ...
	I0916 04:14:55.170288    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6872f5cacb62"
	I0916 04:14:55.189043    4655 logs.go:123] Gathering logs for storage-provisioner [1b9c4326b62d] ...
	I0916 04:14:55.189058    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b9c4326b62d"
	I0916 04:14:55.201355    4655 logs.go:123] Gathering logs for Docker ...
	I0916 04:14:55.201367    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:14:55.226783    4655 logs.go:123] Gathering logs for kube-scheduler [6497fc64f33e] ...
	I0916 04:14:55.226802    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6497fc64f33e"
	I0916 04:14:55.244037    4655 logs.go:123] Gathering logs for kube-proxy [c0732c73e3bf] ...
	I0916 04:14:55.244053    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0732c73e3bf"
	I0916 04:14:55.256630    4655 logs.go:123] Gathering logs for dmesg ...
	I0916 04:14:55.256641    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:14:55.261415    4655 logs.go:123] Gathering logs for kube-apiserver [bb77a45fbb50] ...
	I0916 04:14:55.261427    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb77a45fbb50"
	I0916 04:14:55.277385    4655 logs.go:123] Gathering logs for coredns [8869c7622640] ...
	I0916 04:14:55.277404    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8869c7622640"
	I0916 04:14:55.297948    4655 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:14:55.297959    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:14:55.334680    4655 logs.go:123] Gathering logs for etcd [a44a59d44c6e] ...
	I0916 04:14:55.334693    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a44a59d44c6e"
	I0916 04:14:55.348828    4655 logs.go:123] Gathering logs for coredns [e3ad1db7ced5] ...
	I0916 04:14:55.348844    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3ad1db7ced5"
	I0916 04:14:57.863652    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:14:55.938671    4792 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 04:14:55.938680    4792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 04:14:55.938692    4792 sshutil.go:53] new ssh client: &{IP:localhost Port:50481 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/stopped-upgrade-716000/id_rsa Username:docker}
	I0916 04:14:55.972222    4792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 04:15:02.865915    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:15:02.866160    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:15:02.881822    4655 logs.go:276] 1 containers: [bb77a45fbb50]
	I0916 04:15:02.881923    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:15:02.894111    4655 logs.go:276] 1 containers: [a44a59d44c6e]
	I0916 04:15:02.894201    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:15:02.905689    4655 logs.go:276] 4 containers: [5798ae515cc4 c83fcdb1c777 8869c7622640 e3ad1db7ced5]
	I0916 04:15:02.905778    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:15:02.918912    4655 logs.go:276] 1 containers: [6497fc64f33e]
	I0916 04:15:02.918986    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:15:02.929911    4655 logs.go:276] 1 containers: [c0732c73e3bf]
	I0916 04:15:02.929996    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:15:02.940540    4655 logs.go:276] 1 containers: [6872f5cacb62]
	I0916 04:15:02.940615    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:15:02.950287    4655 logs.go:276] 0 containers: []
	W0916 04:15:02.950299    4655 logs.go:278] No container was found matching "kindnet"
	I0916 04:15:02.950368    4655 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:15:02.960278    4655 logs.go:276] 1 containers: [1b9c4326b62d]
	I0916 04:15:02.960293    4655 logs.go:123] Gathering logs for kube-scheduler [6497fc64f33e] ...
	I0916 04:15:02.960300    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6497fc64f33e"
	I0916 04:15:02.975059    4655 logs.go:123] Gathering logs for kube-proxy [c0732c73e3bf] ...
	I0916 04:15:02.975069    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0732c73e3bf"
	I0916 04:15:02.994501    4655 logs.go:123] Gathering logs for kubelet ...
	I0916 04:15:02.994511    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:15:03.033510    4655 logs.go:123] Gathering logs for coredns [5798ae515cc4] ...
	I0916 04:15:03.033518    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5798ae515cc4"
	I0916 04:15:03.047631    4655 logs.go:123] Gathering logs for coredns [8869c7622640] ...
	I0916 04:15:03.047643    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8869c7622640"
	I0916 04:15:03.059226    4655 logs.go:123] Gathering logs for storage-provisioner [1b9c4326b62d] ...
	I0916 04:15:03.059237    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b9c4326b62d"
	I0916 04:15:03.071052    4655 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:15:03.071062    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:15:03.120496    4655 logs.go:123] Gathering logs for kube-apiserver [bb77a45fbb50] ...
	I0916 04:15:03.120506    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb77a45fbb50"
	I0916 04:15:03.134752    4655 logs.go:123] Gathering logs for etcd [a44a59d44c6e] ...
	I0916 04:15:03.134762    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a44a59d44c6e"
	I0916 04:15:03.148329    4655 logs.go:123] Gathering logs for coredns [e3ad1db7ced5] ...
	I0916 04:15:03.148341    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3ad1db7ced5"
	I0916 04:15:03.160234    4655 logs.go:123] Gathering logs for kube-controller-manager [6872f5cacb62] ...
	I0916 04:15:03.160245    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6872f5cacb62"
	I0916 04:15:03.184613    4655 logs.go:123] Gathering logs for dmesg ...
	I0916 04:15:03.184624    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:15:03.188981    4655 logs.go:123] Gathering logs for Docker ...
	I0916 04:15:03.188987    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:15:03.211810    4655 logs.go:123] Gathering logs for container status ...
	I0916 04:15:03.211816    4655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:15:03.223853    4655 logs.go:123] Gathering logs for coredns [c83fcdb1c777] ...
	I0916 04:15:03.223862    4655 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c83fcdb1c777"
	I0916 04:15:00.149065    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:15:00.149094    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:15:05.738145    4655 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:15:05.149205    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:15:05.149228    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:15:10.740298    4655 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:15:10.744378    4655 out.go:201] 
	W0916 04:15:10.747150    4655 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0916 04:15:10.747156    4655 out.go:270] * 
	W0916 04:15:10.747641    4655 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 04:15:10.759296    4655 out.go:201] 
	I0916 04:15:10.149387    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:15:10.149417    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:15:15.149691    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:15:15.149716    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:15:20.150114    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:15:20.150152    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:15:25.150707    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:15:25.150733    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0916 04:15:25.498654    4792 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0916 04:15:25.502869    4792 out.go:177] * Enabled addons: storage-provisioner
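
	[editor's note] Both processes in this run spend their final minutes in the same loop: GET https://10.0.2.15:8443/healthz with a 5-second client timeout, then retry (4792 is stopped-upgrade-716000 per the kubeadm output above; 4655 appears to belong to running-upgrade-588000, whose logs follow). The failing probe can be replayed by hand from inside the guest (sketch):

	# replay the apiserver health probe; -k because the serving cert is not in the local trust store
	curl -k --max-time 5 https://10.0.2.15:8443/healthz; echo
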
	
	
	==> Docker <==
	-- Journal begins at Mon 2024-09-16 11:06:22 UTC, ends at Mon 2024-09-16 11:15:26 UTC. --
	Sep 16 11:15:11 running-upgrade-588000 dockerd[3153]: time="2024-09-16T11:15:11.573576748Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 16 11:15:11 running-upgrade-588000 dockerd[3153]: time="2024-09-16T11:15:11.573648662Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 16 11:15:11 running-upgrade-588000 dockerd[3153]: time="2024-09-16T11:15:11.573654662Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 11:15:11 running-upgrade-588000 dockerd[3153]: time="2024-09-16T11:15:11.573820404Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/62fb09bb64ef3601a555b4a8a271206d3d090c3a174b480d3645886d6775d59a pid=19214 runtime=io.containerd.runc.v2
	Sep 16 11:15:12 running-upgrade-588000 cri-dockerd[2991]: time="2024-09-16T11:15:12Z" level=error msg="ContainerStats resp: {0x400077d5c0 linux}"
	Sep 16 11:15:12 running-upgrade-588000 cri-dockerd[2991]: time="2024-09-16T11:15:12Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Sep 16 11:15:13 running-upgrade-588000 cri-dockerd[2991]: time="2024-09-16T11:15:13Z" level=error msg="ContainerStats resp: {0x40004e59c0 linux}"
	Sep 16 11:15:13 running-upgrade-588000 cri-dockerd[2991]: time="2024-09-16T11:15:13Z" level=error msg="ContainerStats resp: {0x400035b080 linux}"
	Sep 16 11:15:13 running-upgrade-588000 cri-dockerd[2991]: time="2024-09-16T11:15:13Z" level=error msg="ContainerStats resp: {0x400007c2c0 linux}"
	Sep 16 11:15:13 running-upgrade-588000 cri-dockerd[2991]: time="2024-09-16T11:15:13Z" level=error msg="ContainerStats resp: {0x400007c400 linux}"
	Sep 16 11:15:13 running-upgrade-588000 cri-dockerd[2991]: time="2024-09-16T11:15:13Z" level=error msg="ContainerStats resp: {0x400007c540 linux}"
	Sep 16 11:15:13 running-upgrade-588000 cri-dockerd[2991]: time="2024-09-16T11:15:13Z" level=error msg="ContainerStats resp: {0x400045b440 linux}"
	Sep 16 11:15:13 running-upgrade-588000 cri-dockerd[2991]: time="2024-09-16T11:15:13Z" level=error msg="ContainerStats resp: {0x40009c8940 linux}"
	Sep 16 11:15:17 running-upgrade-588000 cri-dockerd[2991]: time="2024-09-16T11:15:17Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Sep 16 11:15:22 running-upgrade-588000 cri-dockerd[2991]: time="2024-09-16T11:15:22Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Sep 16 11:15:23 running-upgrade-588000 cri-dockerd[2991]: time="2024-09-16T11:15:23Z" level=error msg="ContainerStats resp: {0x40009866c0 linux}"
	Sep 16 11:15:23 running-upgrade-588000 cri-dockerd[2991]: time="2024-09-16T11:15:23Z" level=error msg="ContainerStats resp: {0x400035b640 linux}"
	Sep 16 11:15:24 running-upgrade-588000 cri-dockerd[2991]: time="2024-09-16T11:15:24Z" level=error msg="ContainerStats resp: {0x400087a540 linux}"
	Sep 16 11:15:25 running-upgrade-588000 cri-dockerd[2991]: time="2024-09-16T11:15:25Z" level=error msg="ContainerStats resp: {0x400087b440 linux}"
	Sep 16 11:15:25 running-upgrade-588000 cri-dockerd[2991]: time="2024-09-16T11:15:25Z" level=error msg="ContainerStats resp: {0x400087b580 linux}"
	Sep 16 11:15:25 running-upgrade-588000 cri-dockerd[2991]: time="2024-09-16T11:15:25Z" level=error msg="ContainerStats resp: {0x400087bd40 linux}"
	Sep 16 11:15:25 running-upgrade-588000 cri-dockerd[2991]: time="2024-09-16T11:15:25Z" level=error msg="ContainerStats resp: {0x400077d580 linux}"
	Sep 16 11:15:25 running-upgrade-588000 cri-dockerd[2991]: time="2024-09-16T11:15:25Z" level=error msg="ContainerStats resp: {0x400077dc00 linux}"
	Sep 16 11:15:25 running-upgrade-588000 cri-dockerd[2991]: time="2024-09-16T11:15:25Z" level=error msg="ContainerStats resp: {0x40007d8940 linux}"
	Sep 16 11:15:25 running-upgrade-588000 cri-dockerd[2991]: time="2024-09-16T11:15:25Z" level=error msg="ContainerStats resp: {0x40007d9080 linux}"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	62fb09bb64ef3       edaa71f2aee88       15 seconds ago      Running             coredns                   2                   2d53f3844ad60
	006609b0baba9       edaa71f2aee88       15 seconds ago      Running             coredns                   2                   1dff2f007e350
	5798ae515cc47       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   2d53f3844ad60
	c83fcdb1c7775       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   1dff2f007e350
	c0732c73e3bfc       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   fabdfe0ca2636
	1b9c4326b62d8       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   70e68c781e27f
	6872f5cacb629       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   8e7c458af287b
	6497fc64f33e7       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   c1c9c54c1690c
	a44a59d44c6ea       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   f5c22e1a987df
	bb77a45fbb507       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   88843cb170a90
	
	
	==> coredns [006609b0baba] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 454675055599070263.8180240862843725381. HINFO: read udp 10.244.0.3:35416->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 454675055599070263.8180240862843725381. HINFO: read udp 10.244.0.3:55432->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 454675055599070263.8180240862843725381. HINFO: read udp 10.244.0.3:45859->10.0.2.3:53: i/o timeout
	
	
	==> coredns [5798ae515cc4] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 320187370616599891.793025422441848170. HINFO: read udp 10.244.0.2:57760->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 320187370616599891.793025422441848170. HINFO: read udp 10.244.0.2:52622->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 320187370616599891.793025422441848170. HINFO: read udp 10.244.0.2:54395->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 320187370616599891.793025422441848170. HINFO: read udp 10.244.0.2:53187->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 320187370616599891.793025422441848170. HINFO: read udp 10.244.0.2:36161->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 320187370616599891.793025422441848170. HINFO: read udp 10.244.0.2:51969->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 320187370616599891.793025422441848170. HINFO: read udp 10.244.0.2:55203->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 320187370616599891.793025422441848170. HINFO: read udp 10.244.0.2:38257->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 320187370616599891.793025422441848170. HINFO: read udp 10.244.0.2:52171->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 320187370616599891.793025422441848170. HINFO: read udp 10.244.0.2:59732->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [62fb09bb64ef] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 3977749444911718498.8665034076632037234. HINFO: read udp 10.244.0.2:53423->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3977749444911718498.8665034076632037234. HINFO: read udp 10.244.0.2:35317->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3977749444911718498.8665034076632037234. HINFO: read udp 10.244.0.2:47358->10.0.2.3:53: i/o timeout
	
	
	==> coredns [c83fcdb1c777] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 8541005253823202128.1302983096796973801. HINFO: read udp 10.244.0.3:59048->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8541005253823202128.1302983096796973801. HINFO: read udp 10.244.0.3:56326->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8541005253823202128.1302983096796973801. HINFO: read udp 10.244.0.3:46043->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8541005253823202128.1302983096796973801. HINFO: read udp 10.244.0.3:49618->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8541005253823202128.1302983096796973801. HINFO: read udp 10.244.0.3:45059->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8541005253823202128.1302983096796973801. HINFO: read udp 10.244.0.3:46816->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8541005253823202128.1302983096796973801. HINFO: read udp 10.244.0.3:49607->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8541005253823202128.1302983096796973801. HINFO: read udp 10.244.0.3:51873->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8541005253823202128.1302983096796973801. HINFO: read udp 10.244.0.3:52540->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8541005253823202128.1302983096796973801. HINFO: read udp 10.244.0.3:33964->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
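
	[editor's note] Every coredns instance above fails the same way: its startup HINFO self-check to the upstream resolver at 10.0.2.3:53 (the DNS stub that QEMU user-mode networking provides at that address) hits an i/o timeout, so forwarding out of the cluster is broken even though coredns itself is serving on :53. A quick probe of that upstream from inside the guest (sketch; the 2s timeout mirrors the pattern above):

	dig @10.0.2.3 +time=2 +tries=1 kubernetes.io
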
	
	
	==> describe nodes <==
	Name:               running-upgrade-588000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-588000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=running-upgrade-588000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T04_11_09_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 11:11:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-588000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 11:15:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 11:11:09 +0000   Mon, 16 Sep 2024 11:11:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 11:11:09 +0000   Mon, 16 Sep 2024 11:11:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 11:11:09 +0000   Mon, 16 Sep 2024 11:11:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 11:11:09 +0000   Mon, 16 Sep 2024 11:11:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-588000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 8309996d0fcf47a8a1298eaabcfdb05c
	  System UUID:                8309996d0fcf47a8a1298eaabcfdb05c
	  Boot ID:                    0f4c6eed-fa6e-4ad7-a34b-105985c6c496
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-lsrfq                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 coredns-6d4b75cb6d-xzqcl                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 etcd-running-upgrade-588000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m20s
	  kube-system                 kube-apiserver-running-upgrade-588000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-controller-manager-running-upgrade-588000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 kube-proxy-xfr4t                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-scheduler-running-upgrade-588000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m3s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  4m23s (x5 over 4m23s)  kubelet          Node running-upgrade-588000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m23s (x4 over 4m23s)  kubelet          Node running-upgrade-588000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m23s (x4 over 4m23s)  kubelet          Node running-upgrade-588000 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  4m18s                  kubelet          Node running-upgrade-588000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  4m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    4m18s                  kubelet          Node running-upgrade-588000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m18s                  kubelet          Node running-upgrade-588000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m18s                  kubelet          Node running-upgrade-588000 status is now: NodeReady
	  Normal  Starting                 4m18s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m5s                   node-controller  Node running-upgrade-588000 event: Registered Node running-upgrade-588000 in Controller
	
	
	==> dmesg <==
	[  +1.620539] systemd-fstab-generator[874]: Ignoring "noauto" for root device
	[  +0.066079] systemd-fstab-generator[885]: Ignoring "noauto" for root device
	[  +0.062151] systemd-fstab-generator[896]: Ignoring "noauto" for root device
	[  +1.139514] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.063857] systemd-fstab-generator[1047]: Ignoring "noauto" for root device
	[  +0.060299] systemd-fstab-generator[1058]: Ignoring "noauto" for root device
	[  +2.348890] systemd-fstab-generator[1287]: Ignoring "noauto" for root device
	[  +8.683826] systemd-fstab-generator[1926]: Ignoring "noauto" for root device
	[  +2.623298] systemd-fstab-generator[2204]: Ignoring "noauto" for root device
	[  +0.141475] systemd-fstab-generator[2240]: Ignoring "noauto" for root device
	[  +0.105933] systemd-fstab-generator[2252]: Ignoring "noauto" for root device
	[  +0.103797] systemd-fstab-generator[2267]: Ignoring "noauto" for root device
	[  +1.526277] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.211518] systemd-fstab-generator[2947]: Ignoring "noauto" for root device
	[  +0.082887] systemd-fstab-generator[2959]: Ignoring "noauto" for root device
	[  +0.066521] systemd-fstab-generator[2970]: Ignoring "noauto" for root device
	[  +0.068438] systemd-fstab-generator[2984]: Ignoring "noauto" for root device
	[  +2.226504] systemd-fstab-generator[3140]: Ignoring "noauto" for root device
	[  +3.455276] systemd-fstab-generator[3790]: Ignoring "noauto" for root device
	[  +1.715209] systemd-fstab-generator[4171]: Ignoring "noauto" for root device
	[Sep16 11:07] kauditd_printk_skb: 68 callbacks suppressed
	[Sep16 11:11] kauditd_printk_skb: 23 callbacks suppressed
	[  +1.570564] systemd-fstab-generator[12266]: Ignoring "noauto" for root device
	[  +5.615206] systemd-fstab-generator[12854]: Ignoring "noauto" for root device
	[  +0.463745] systemd-fstab-generator[12987]: Ignoring "noauto" for root device
	
	
	==> etcd [a44a59d44c6e] <==
	{"level":"info","ts":"2024-09-16T11:11:05.159Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-09-16T11:11:05.159Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-09-16T11:11:05.161Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-16T11:11:05.165Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-09-16T11:11:05.165Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-09-16T11:11:05.166Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-16T11:11:05.166Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-16T11:11:05.617Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-16T11:11:05.617Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-16T11:11:05.617Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-09-16T11:11:05.617Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-09-16T11:11:05.617Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-09-16T11:11:05.617Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-09-16T11:11:05.617Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-09-16T11:11:05.617Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:11:05.623Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:11:05.623Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-588000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T11:11:05.623Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T11:11:05.623Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:11:05.623Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:11:05.624Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T11:11:05.624Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T11:11:05.624Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T11:11:05.624Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T11:11:05.628Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	
	
	==> kernel <==
	 11:15:27 up 9 min,  0 users,  load average: 0.21, 0.27, 0.15
	Linux running-upgrade-588000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [bb77a45fbb50] <==
	I0916 11:11:06.859896       1 controller.go:611] quota admission added evaluator for: namespaces
	I0916 11:11:06.860563       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0916 11:11:06.888503       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0916 11:11:06.889722       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0916 11:11:06.889767       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0916 11:11:06.889809       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0916 11:11:06.889817       1 cache.go:39] Caches are synced for autoregister controller
	I0916 11:11:07.612645       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0916 11:11:07.800068       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0916 11:11:07.802959       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0916 11:11:07.802986       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0916 11:11:07.944151       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0916 11:11:07.955922       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0916 11:11:08.065115       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0916 11:11:08.067068       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0916 11:11:08.067465       1 controller.go:611] quota admission added evaluator for: endpoints
	I0916 11:11:08.068763       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 11:11:08.952302       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0916 11:11:09.602835       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0916 11:11:09.608643       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0916 11:11:09.622427       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0916 11:11:09.712692       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 11:11:23.179810       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0916 11:11:23.226097       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0916 11:11:24.011891       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [6872f5cacb62] <==
	I0916 11:11:22.574113       1 shared_informer.go:262] Caches are synced for disruption
	I0916 11:11:22.574122       1 disruption.go:371] Sending events to api server.
	I0916 11:11:22.575240       1 shared_informer.go:262] Caches are synced for daemon sets
	I0916 11:11:22.575279       1 shared_informer.go:262] Caches are synced for deployment
	I0916 11:11:22.575482       1 shared_informer.go:262] Caches are synced for taint
	I0916 11:11:22.575514       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0916 11:11:22.575535       1 node_lifecycle_controller.go:1014] Missing timestamp for Node running-upgrade-588000. Assuming now as a timestamp.
	I0916 11:11:22.575552       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0916 11:11:22.575631       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0916 11:11:22.575753       1 event.go:294] "Event occurred" object="running-upgrade-588000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node running-upgrade-588000 event: Registered Node running-upgrade-588000 in Controller"
	I0916 11:11:22.576375       1 shared_informer.go:262] Caches are synced for PVC protection
	I0916 11:11:22.576402       1 shared_informer.go:262] Caches are synced for persistent volume
	I0916 11:11:22.578589       1 shared_informer.go:262] Caches are synced for resource quota
	I0916 11:11:22.618571       1 shared_informer.go:262] Caches are synced for ephemeral
	I0916 11:11:22.625162       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I0916 11:11:22.625206       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0916 11:11:22.625230       1 shared_informer.go:262] Caches are synced for attach detach
	I0916 11:11:22.625489       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0916 11:11:23.040022       1 shared_informer.go:262] Caches are synced for garbage collector
	I0916 11:11:23.124897       1 shared_informer.go:262] Caches are synced for garbage collector
	I0916 11:11:23.124905       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0916 11:11:23.180840       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0916 11:11:23.228665       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-xfr4t"
	I0916 11:11:23.426959       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-xzqcl"
	I0916 11:11:23.429202       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-lsrfq"
	
	
	==> kube-proxy [c0732c73e3bf] <==
	I0916 11:11:23.999351       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0916 11:11:23.999378       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0916 11:11:23.999388       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0916 11:11:24.009900       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0916 11:11:24.009915       1 server_others.go:206] "Using iptables Proxier"
	I0916 11:11:24.009926       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0916 11:11:24.010027       1 server.go:661] "Version info" version="v1.24.1"
	I0916 11:11:24.010034       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 11:11:24.010259       1 config.go:317] "Starting service config controller"
	I0916 11:11:24.010329       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0916 11:11:24.010370       1 config.go:226] "Starting endpoint slice config controller"
	I0916 11:11:24.010376       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0916 11:11:24.010623       1 config.go:444] "Starting node config controller"
	I0916 11:11:24.010646       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0916 11:11:24.110635       1 shared_informer.go:262] Caches are synced for service config
	I0916 11:11:24.110674       1 shared_informer.go:262] Caches are synced for node config
	I0916 11:11:24.110636       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [6497fc64f33e] <==
	W0916 11:11:06.863667       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 11:11:06.863686       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0916 11:11:06.863765       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 11:11:06.863783       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0916 11:11:06.863918       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0916 11:11:06.863940       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0916 11:11:06.863973       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0916 11:11:06.864038       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0916 11:11:06.864143       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 11:11:06.864184       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0916 11:11:06.864258       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 11:11:06.864473       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0916 11:11:06.864521       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0916 11:11:06.864539       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0916 11:11:07.711462       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 11:11:07.711622       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0916 11:11:07.711493       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0916 11:11:07.711759       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0916 11:11:07.814221       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0916 11:11:07.814256       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0916 11:11:07.892654       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 11:11:07.892744       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0916 11:11:07.904280       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 11:11:07.904294       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0916 11:11:10.456214       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-09-16 11:06:22 UTC, ends at Mon 2024-09-16 11:15:27 UTC. --
	Sep 16 11:11:10 running-upgrade-588000 kubelet[12860]: I0916 11:11:10.639851   12860 apiserver.go:52] "Watching apiserver"
	Sep 16 11:11:11 running-upgrade-588000 kubelet[12860]: I0916 11:11:11.058575   12860 reconciler.go:157] "Reconciler: start to sync state"
	Sep 16 11:11:11 running-upgrade-588000 kubelet[12860]: E0916 11:11:11.245335   12860 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-running-upgrade-588000\" already exists" pod="kube-system/kube-apiserver-running-upgrade-588000"
	Sep 16 11:11:11 running-upgrade-588000 kubelet[12860]: E0916 11:11:11.449661   12860 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-running-upgrade-588000\" already exists" pod="kube-system/etcd-running-upgrade-588000"
	Sep 16 11:11:11 running-upgrade-588000 kubelet[12860]: E0916 11:11:11.649129   12860 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-scheduler-running-upgrade-588000\" already exists" pod="kube-system/kube-scheduler-running-upgrade-588000"
	Sep 16 11:11:22 running-upgrade-588000 kubelet[12860]: I0916 11:11:22.511130   12860 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 16 11:11:22 running-upgrade-588000 kubelet[12860]: I0916 11:11:22.511558   12860 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 16 11:11:22 running-upgrade-588000 kubelet[12860]: I0916 11:11:22.582040   12860 topology_manager.go:200] "Topology Admit Handler"
	Sep 16 11:11:22 running-upgrade-588000 kubelet[12860]: I0916 11:11:22.712942   12860 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/990e006e-8b6a-4ff1-a089-f27ade6a4e77-tmp\") pod \"storage-provisioner\" (UID: \"990e006e-8b6a-4ff1-a089-f27ade6a4e77\") " pod="kube-system/storage-provisioner"
	Sep 16 11:11:22 running-upgrade-588000 kubelet[12860]: I0916 11:11:22.712979   12860 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjzk6\" (UniqueName: \"kubernetes.io/projected/990e006e-8b6a-4ff1-a089-f27ade6a4e77-kube-api-access-jjzk6\") pod \"storage-provisioner\" (UID: \"990e006e-8b6a-4ff1-a089-f27ade6a4e77\") " pod="kube-system/storage-provisioner"
	Sep 16 11:11:23 running-upgrade-588000 kubelet[12860]: I0916 11:11:23.230914   12860 topology_manager.go:200] "Topology Admit Handler"
	Sep 16 11:11:23 running-upgrade-588000 kubelet[12860]: I0916 11:11:23.319268   12860 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4dd5fd0e-0176-4cbf-b813-459ca9678c4f-xtables-lock\") pod \"kube-proxy-xfr4t\" (UID: \"4dd5fd0e-0176-4cbf-b813-459ca9678c4f\") " pod="kube-system/kube-proxy-xfr4t"
	Sep 16 11:11:23 running-upgrade-588000 kubelet[12860]: I0916 11:11:23.319300   12860 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4dd5fd0e-0176-4cbf-b813-459ca9678c4f-kube-proxy\") pod \"kube-proxy-xfr4t\" (UID: \"4dd5fd0e-0176-4cbf-b813-459ca9678c4f\") " pod="kube-system/kube-proxy-xfr4t"
	Sep 16 11:11:23 running-upgrade-588000 kubelet[12860]: I0916 11:11:23.319312   12860 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4dd5fd0e-0176-4cbf-b813-459ca9678c4f-lib-modules\") pod \"kube-proxy-xfr4t\" (UID: \"4dd5fd0e-0176-4cbf-b813-459ca9678c4f\") " pod="kube-system/kube-proxy-xfr4t"
	Sep 16 11:11:23 running-upgrade-588000 kubelet[12860]: I0916 11:11:23.319333   12860 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zltjt\" (UniqueName: \"kubernetes.io/projected/4dd5fd0e-0176-4cbf-b813-459ca9678c4f-kube-api-access-zltjt\") pod \"kube-proxy-xfr4t\" (UID: \"4dd5fd0e-0176-4cbf-b813-459ca9678c4f\") " pod="kube-system/kube-proxy-xfr4t"
	Sep 16 11:11:23 running-upgrade-588000 kubelet[12860]: I0916 11:11:23.429130   12860 topology_manager.go:200] "Topology Admit Handler"
	Sep 16 11:11:23 running-upgrade-588000 kubelet[12860]: I0916 11:11:23.432062   12860 topology_manager.go:200] "Topology Admit Handler"
	Sep 16 11:11:23 running-upgrade-588000 kubelet[12860]: I0916 11:11:23.520322   12860 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c86f5\" (UniqueName: \"kubernetes.io/projected/ad81d3b7-3bce-4d04-9d5b-99851ce10a64-kube-api-access-c86f5\") pod \"coredns-6d4b75cb6d-lsrfq\" (UID: \"ad81d3b7-3bce-4d04-9d5b-99851ce10a64\") " pod="kube-system/coredns-6d4b75cb6d-lsrfq"
	Sep 16 11:11:23 running-upgrade-588000 kubelet[12860]: I0916 11:11:23.520370   12860 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/190bf1d8-aa94-44ed-be94-dd7ede0fafff-config-volume\") pod \"coredns-6d4b75cb6d-xzqcl\" (UID: \"190bf1d8-aa94-44ed-be94-dd7ede0fafff\") " pod="kube-system/coredns-6d4b75cb6d-xzqcl"
	Sep 16 11:11:23 running-upgrade-588000 kubelet[12860]: I0916 11:11:23.520384   12860 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ad81d3b7-3bce-4d04-9d5b-99851ce10a64-config-volume\") pod \"coredns-6d4b75cb6d-lsrfq\" (UID: \"ad81d3b7-3bce-4d04-9d5b-99851ce10a64\") " pod="kube-system/coredns-6d4b75cb6d-lsrfq"
	Sep 16 11:11:23 running-upgrade-588000 kubelet[12860]: I0916 11:11:23.520395   12860 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gxmr\" (UniqueName: \"kubernetes.io/projected/190bf1d8-aa94-44ed-be94-dd7ede0fafff-kube-api-access-7gxmr\") pod \"coredns-6d4b75cb6d-xzqcl\" (UID: \"190bf1d8-aa94-44ed-be94-dd7ede0fafff\") " pod="kube-system/coredns-6d4b75cb6d-xzqcl"
	Sep 16 11:11:24 running-upgrade-588000 kubelet[12860]: I0916 11:11:24.883410   12860 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="1dff2f007e350d4f357f4e33e5495cb162ac505a0b83177b12194508736fc4a8"
	Sep 16 11:11:24 running-upgrade-588000 kubelet[12860]: I0916 11:11:24.886899   12860 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="2d53f3844ad608c389371565487507ca544caf35746b4828d1081882ae95025f"
	Sep 16 11:15:12 running-upgrade-588000 kubelet[12860]: I0916 11:15:12.170584   12860 scope.go:110] "RemoveContainer" containerID="e3ad1db7ced552ce94a50e0625de4d9a984b9c8e588d3a2d6fafab1fd05cef39"
	Sep 16 11:15:12 running-upgrade-588000 kubelet[12860]: I0916 11:15:12.190075   12860 scope.go:110] "RemoveContainer" containerID="8869c7622640c0bf614632bfddbdc2c828f0acae5a29453f1ef1ce723c0a8b5f"
	
	
	==> storage-provisioner [1b9c4326b62d] <==
	I0916 11:11:23.083995       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 11:11:23.088549       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 11:11:23.088572       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 11:11:23.091619       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 11:11:23.091743       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"af688c78-38e1-45ff-b5c4-e8b1fb1658dd", APIVersion:"v1", ResourceVersion:"327", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-588000_3a55b22b-f11c-4337-9362-2abc09515f44 became leader
	I0916 11:11:23.091755       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-588000_3a55b22b-f11c-4337-9362-2abc09515f44!
	I0916 11:11:23.192800       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-588000_3a55b22b-f11c-4337-9362-2abc09515f44!
	

-- /stdout --
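
[editor's note] The dump above is the standard bundle that "minikube logs" collects for a profile (dmesg, per-container logs, the kubelet journal). A minimal sketch of regenerating the same bundle by hand, using the profile name and binary path from the log; the error boxes later in this report suggest the same command:

	out/minikube-darwin-arm64 -p running-upgrade-588000 logs --file=logs.txt
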
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-588000 -n running-upgrade-588000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-588000 -n running-upgrade-588000: exit status 2 (15.767354166s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-588000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-588000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-588000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-arm64 delete -p running-upgrade-588000: (3.568096666s)
--- FAIL: TestRunningBinaryUpgrade (594.65s)
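
[editor's note] The probe the harness used above can be replayed by hand; "minikube status" takes a Go template via --format and reports stopped components through its exit code. A minimal sketch against the same profile (profile name and binary path taken from the log; the profile was deleted during cleanup, so this is illustrative):

	# Print only the apiserver field of the status; "Stopped" with a non-zero
	# exit code is exactly what helpers_test.go recorded above.
	out/minikube-darwin-arm64 status --format='{{.APIServer}}' -p running-upgrade-588000
	echo "exit: $?"
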

TestKubernetesUpgrade (17.22s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-711000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-711000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.906013833s)

-- stdout --
	* [kubernetes-upgrade-711000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19651
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-711000" primary control-plane node in "kubernetes-upgrade-711000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-711000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0916 04:08:52.198424    4721 out.go:345] Setting OutFile to fd 1 ...
	I0916 04:08:52.198549    4721 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:08:52.198553    4721 out.go:358] Setting ErrFile to fd 2...
	I0916 04:08:52.198555    4721 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:08:52.198672    4721 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1133/.minikube/bin
	I0916 04:08:52.199719    4721 out.go:352] Setting JSON to false
	I0916 04:08:52.215788    4721 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4095,"bootTime":1726480837,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0916 04:08:52.215858    4721 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 04:08:52.220790    4721 out.go:177] * [kubernetes-upgrade-711000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0916 04:08:52.228678    4721 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 04:08:52.228736    4721 notify.go:220] Checking for updates...
	I0916 04:08:52.234624    4721 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	I0916 04:08:52.237609    4721 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0916 04:08:52.240653    4721 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 04:08:52.241645    4721 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	I0916 04:08:52.244612    4721 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 04:08:52.248006    4721 config.go:182] Loaded profile config "multinode-990000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 04:08:52.248067    4721 config.go:182] Loaded profile config "running-upgrade-588000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0916 04:08:52.248123    4721 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 04:08:52.252504    4721 out.go:177] * Using the qemu2 driver based on user configuration
	I0916 04:08:52.259683    4721 start.go:297] selected driver: qemu2
	I0916 04:08:52.259692    4721 start.go:901] validating driver "qemu2" against <nil>
	I0916 04:08:52.259699    4721 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 04:08:52.261833    4721 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 04:08:52.264619    4721 out.go:177] * Automatically selected the socket_vmnet network
	I0916 04:08:52.267808    4721 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0916 04:08:52.267822    4721 cni.go:84] Creating CNI manager for ""
	I0916 04:08:52.267851    4721 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0916 04:08:52.267883    4721 start.go:340] cluster config:
	{Name:kubernetes-upgrade-711000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-711000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 04:08:52.271356    4721 iso.go:125] acquiring lock: {Name:mk55e9fb1297ea51932361bbd0234f8c1091a697 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 04:08:52.277606    4721 out.go:177] * Starting "kubernetes-upgrade-711000" primary control-plane node in "kubernetes-upgrade-711000" cluster
	I0916 04:08:52.281611    4721 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0916 04:08:52.281625    4721 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0916 04:08:52.281635    4721 cache.go:56] Caching tarball of preloaded images
	I0916 04:08:52.281690    4721 preload.go:172] Found /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0916 04:08:52.281696    4721 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0916 04:08:52.281756    4721 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/kubernetes-upgrade-711000/config.json ...
	I0916 04:08:52.281770    4721 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/kubernetes-upgrade-711000/config.json: {Name:mk22a8682a033b718538dde5224e779255f1f2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 04:08:52.282017    4721 start.go:360] acquireMachinesLock for kubernetes-upgrade-711000: {Name:mk751074fc8f4092f9eb4cf42ed6e63cca97a7fa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 04:08:52.282049    4721 start.go:364] duration metric: took 25.75µs to acquireMachinesLock for "kubernetes-upgrade-711000"
	I0916 04:08:52.282059    4721 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-711000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-711000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 04:08:52.282081    4721 start.go:125] createHost starting for "" (driver="qemu2")
	I0916 04:08:52.285627    4721 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0916 04:08:52.300581    4721 start.go:159] libmachine.API.Create for "kubernetes-upgrade-711000" (driver="qemu2")
	I0916 04:08:52.300606    4721 client.go:168] LocalClient.Create starting
	I0916 04:08:52.300661    4721 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/ca.pem
	I0916 04:08:52.300691    4721 main.go:141] libmachine: Decoding PEM data...
	I0916 04:08:52.300702    4721 main.go:141] libmachine: Parsing certificate...
	I0916 04:08:52.300742    4721 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/cert.pem
	I0916 04:08:52.300765    4721 main.go:141] libmachine: Decoding PEM data...
	I0916 04:08:52.300774    4721 main.go:141] libmachine: Parsing certificate...
	I0916 04:08:52.301099    4721 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19651-1133/.minikube/cache/iso/arm64/minikube-v1.34.0-1726415472-19646-arm64.iso...
	I0916 04:08:52.543434    4721 main.go:141] libmachine: Creating SSH key...
	I0916 04:08:52.592543    4721 main.go:141] libmachine: Creating Disk image...
	I0916 04:08:52.592550    4721 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0916 04:08:52.592758    4721 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/kubernetes-upgrade-711000/disk.qcow2.raw /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/kubernetes-upgrade-711000/disk.qcow2
	I0916 04:08:52.602106    4721 main.go:141] libmachine: STDOUT: 
	I0916 04:08:52.602124    4721 main.go:141] libmachine: STDERR: 
	I0916 04:08:52.602183    4721 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/kubernetes-upgrade-711000/disk.qcow2 +20000M
	I0916 04:08:52.610366    4721 main.go:141] libmachine: STDOUT: Image resized.
	
	I0916 04:08:52.610387    4721 main.go:141] libmachine: STDERR: 
	I0916 04:08:52.610407    4721 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/kubernetes-upgrade-711000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/kubernetes-upgrade-711000/disk.qcow2
	I0916 04:08:52.610416    4721 main.go:141] libmachine: Starting QEMU VM...
	I0916 04:08:52.610431    4721 qemu.go:418] Using hvf for hardware acceleration
	I0916 04:08:52.610461    4721 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/kubernetes-upgrade-711000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/kubernetes-upgrade-711000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/kubernetes-upgrade-711000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:e1:13:1f:9e:75 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/kubernetes-upgrade-711000/disk.qcow2
	I0916 04:08:52.612045    4721 main.go:141] libmachine: STDOUT: 
	I0916 04:08:52.612058    4721 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 04:08:52.612080    4721 client.go:171] duration metric: took 311.473292ms to LocalClient.Create
	I0916 04:08:54.612658    4721 start.go:128] duration metric: took 2.3306085s to createHost
	I0916 04:08:54.612695    4721 start.go:83] releasing machines lock for "kubernetes-upgrade-711000", held for 2.330687042s
	W0916 04:08:54.612735    4721 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 04:08:54.621452    4721 out.go:177] * Deleting "kubernetes-upgrade-711000" in qemu2 ...
	W0916 04:08:54.641252    4721 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 04:08:54.641262    4721 start.go:729] Will try again in 5 seconds ...
	I0916 04:08:59.643454    4721 start.go:360] acquireMachinesLock for kubernetes-upgrade-711000: {Name:mk751074fc8f4092f9eb4cf42ed6e63cca97a7fa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 04:08:59.643968    4721 start.go:364] duration metric: took 413µs to acquireMachinesLock for "kubernetes-upgrade-711000"
	I0916 04:08:59.644031    4721 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-711000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-711000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 04:08:59.644263    4721 start.go:125] createHost starting for "" (driver="qemu2")
	I0916 04:08:59.653906    4721 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0916 04:08:59.698922    4721 start.go:159] libmachine.API.Create for "kubernetes-upgrade-711000" (driver="qemu2")
	I0916 04:08:59.699013    4721 client.go:168] LocalClient.Create starting
	I0916 04:08:59.699175    4721 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/ca.pem
	I0916 04:08:59.699258    4721 main.go:141] libmachine: Decoding PEM data...
	I0916 04:08:59.699273    4721 main.go:141] libmachine: Parsing certificate...
	I0916 04:08:59.699341    4721 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/cert.pem
	I0916 04:08:59.699386    4721 main.go:141] libmachine: Decoding PEM data...
	I0916 04:08:59.699396    4721 main.go:141] libmachine: Parsing certificate...
	I0916 04:08:59.700227    4721 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19651-1133/.minikube/cache/iso/arm64/minikube-v1.34.0-1726415472-19646-arm64.iso...
	I0916 04:08:59.867080    4721 main.go:141] libmachine: Creating SSH key...
	I0916 04:09:00.005014    4721 main.go:141] libmachine: Creating Disk image...
	I0916 04:09:00.005022    4721 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0916 04:09:00.005217    4721 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/kubernetes-upgrade-711000/disk.qcow2.raw /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/kubernetes-upgrade-711000/disk.qcow2
	I0916 04:09:00.014850    4721 main.go:141] libmachine: STDOUT: 
	I0916 04:09:00.014871    4721 main.go:141] libmachine: STDERR: 
	I0916 04:09:00.014930    4721 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/kubernetes-upgrade-711000/disk.qcow2 +20000M
	I0916 04:09:00.022891    4721 main.go:141] libmachine: STDOUT: Image resized.
	
	I0916 04:09:00.022906    4721 main.go:141] libmachine: STDERR: 
	I0916 04:09:00.022936    4721 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/kubernetes-upgrade-711000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/kubernetes-upgrade-711000/disk.qcow2
	I0916 04:09:00.022941    4721 main.go:141] libmachine: Starting QEMU VM...
	I0916 04:09:00.022950    4721 qemu.go:418] Using hvf for hardware acceleration
	I0916 04:09:00.022978    4721 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/kubernetes-upgrade-711000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/kubernetes-upgrade-711000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/kubernetes-upgrade-711000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:26:19:11:f7:7d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/kubernetes-upgrade-711000/disk.qcow2
	I0916 04:09:00.024741    4721 main.go:141] libmachine: STDOUT: 
	I0916 04:09:00.024759    4721 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 04:09:00.024771    4721 client.go:171] duration metric: took 325.748333ms to LocalClient.Create
	I0916 04:09:02.026963    4721 start.go:128] duration metric: took 2.382699875s to createHost
	I0916 04:09:02.027043    4721 start.go:83] releasing machines lock for "kubernetes-upgrade-711000", held for 2.383100042s
	W0916 04:09:02.027497    4721 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-711000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-711000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 04:09:02.044132    4721 out.go:201] 
	W0916 04:09:02.047375    4721 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0916 04:09:02.047410    4721 out.go:270] * 
	* 
	W0916 04:09:02.049343    4721 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 04:09:02.064192    4721 out.go:201] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-711000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
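
[editor's note] Both create attempts above die at the same step: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so QEMU never receives a network file descriptor. A minimal sketch of checking the daemon on the build host, using only the paths that appear in the log (the gateway address is an assumption, not from the log):

	# Is the socket present on the host?
	ls -l /var/run/socket_vmnet
	# If it is missing, start the daemon by hand (root is required to open vmnet;
	# the gateway address below is an assumption)
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet &
	# Re-run the same client handshake minikube uses; "Connection refused" here
	# reproduces the failure independently of minikube.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
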
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-711000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-711000: (1.921691333s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-711000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-711000 status --format={{.Host}}: exit status 7 (29.261375ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
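
[editor's note] The "(may be ok)" wording reflects how "minikube status" encodes state: per its help text the exit code is a bitmask of stopped components (7 = 1 "minikube NOK" + 2 "cluster NOK" + 4 "kubernetes NOK"), so exit 7 right after an explicit stop is the expected result, not a command failure. A sketch of the same check:

	out/minikube-darwin-arm64 -p kubernetes-upgrade-711000 status --format='{{.Host}}'
	echo "exit: $?"   # 7 for a fully stopped profile
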
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-711000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-711000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.181894542s)

-- stdout --
	* [kubernetes-upgrade-711000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19651
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-711000" primary control-plane node in "kubernetes-upgrade-711000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-711000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-711000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0916 04:09:04.056763    4750 out.go:345] Setting OutFile to fd 1 ...
	I0916 04:09:04.057132    4750 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:09:04.057136    4750 out.go:358] Setting ErrFile to fd 2...
	I0916 04:09:04.057138    4750 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:09:04.057315    4750 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1133/.minikube/bin
	I0916 04:09:04.058700    4750 out.go:352] Setting JSON to false
	I0916 04:09:04.075451    4750 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4107,"bootTime":1726480837,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0916 04:09:04.075546    4750 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 04:09:04.080387    4750 out.go:177] * [kubernetes-upgrade-711000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0916 04:09:04.087355    4750 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 04:09:04.087399    4750 notify.go:220] Checking for updates...
	I0916 04:09:04.095362    4750 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	I0916 04:09:04.098368    4750 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0916 04:09:04.101390    4750 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 04:09:04.104256    4750 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	I0916 04:09:04.107352    4750 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 04:09:04.110588    4750 config.go:182] Loaded profile config "kubernetes-upgrade-711000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0916 04:09:04.110838    4750 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 04:09:04.114303    4750 out.go:177] * Using the qemu2 driver based on existing profile
	I0916 04:09:04.121345    4750 start.go:297] selected driver: qemu2
	I0916 04:09:04.121352    4750 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-711000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-711000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 04:09:04.121429    4750 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 04:09:04.123602    4750 cni.go:84] Creating CNI manager for ""
	I0916 04:09:04.123634    4750 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 04:09:04.123658    4750 start.go:340] cluster config:
	{Name:kubernetes-upgrade-711000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-711000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 04:09:04.126860    4750 iso.go:125] acquiring lock: {Name:mk55e9fb1297ea51932361bbd0234f8c1091a697 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 04:09:04.134360    4750 out.go:177] * Starting "kubernetes-upgrade-711000" primary control-plane node in "kubernetes-upgrade-711000" cluster
	I0916 04:09:04.138276    4750 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 04:09:04.138287    4750 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0916 04:09:04.138298    4750 cache.go:56] Caching tarball of preloaded images
	I0916 04:09:04.138344    4750 preload.go:172] Found /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0916 04:09:04.138349    4750 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0916 04:09:04.138395    4750 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/kubernetes-upgrade-711000/config.json ...
	I0916 04:09:04.138864    4750 start.go:360] acquireMachinesLock for kubernetes-upgrade-711000: {Name:mk751074fc8f4092f9eb4cf42ed6e63cca97a7fa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 04:09:04.138888    4750 start.go:364] duration metric: took 18.75µs to acquireMachinesLock for "kubernetes-upgrade-711000"
	I0916 04:09:04.138897    4750 start.go:96] Skipping create...Using existing machine configuration
	I0916 04:09:04.138902    4750 fix.go:54] fixHost starting: 
	I0916 04:09:04.139019    4750 fix.go:112] recreateIfNeeded on kubernetes-upgrade-711000: state=Stopped err=<nil>
	W0916 04:09:04.139026    4750 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 04:09:04.143355    4750 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-711000" ...
	I0916 04:09:04.151338    4750 qemu.go:418] Using hvf for hardware acceleration
	I0916 04:09:04.151375    4750 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/kubernetes-upgrade-711000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/kubernetes-upgrade-711000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/kubernetes-upgrade-711000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:26:19:11:f7:7d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/kubernetes-upgrade-711000/disk.qcow2
	I0916 04:09:04.153109    4750 main.go:141] libmachine: STDOUT: 
	I0916 04:09:04.153126    4750 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 04:09:04.153154    4750 fix.go:56] duration metric: took 14.252834ms for fixHost
	I0916 04:09:04.153157    4750 start.go:83] releasing machines lock for "kubernetes-upgrade-711000", held for 14.264416ms
	W0916 04:09:04.153161    4750 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0916 04:09:04.153192    4750 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 04:09:04.153195    4750 start.go:729] Will try again in 5 seconds ...
	I0916 04:09:09.154511    4750 start.go:360] acquireMachinesLock for kubernetes-upgrade-711000: {Name:mk751074fc8f4092f9eb4cf42ed6e63cca97a7fa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 04:09:09.155099    4750 start.go:364] duration metric: took 484.084µs to acquireMachinesLock for "kubernetes-upgrade-711000"
	I0916 04:09:09.155258    4750 start.go:96] Skipping create...Using existing machine configuration
	I0916 04:09:09.155279    4750 fix.go:54] fixHost starting: 
	I0916 04:09:09.156027    4750 fix.go:112] recreateIfNeeded on kubernetes-upgrade-711000: state=Stopped err=<nil>
	W0916 04:09:09.156052    4750 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 04:09:09.160559    4750 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-711000" ...
	I0916 04:09:09.166572    4750 qemu.go:418] Using hvf for hardware acceleration
	I0916 04:09:09.166803    4750 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/kubernetes-upgrade-711000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/kubernetes-upgrade-711000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/kubernetes-upgrade-711000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:26:19:11:f7:7d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/kubernetes-upgrade-711000/disk.qcow2
	I0916 04:09:09.176440    4750 main.go:141] libmachine: STDOUT: 
	I0916 04:09:09.176507    4750 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 04:09:09.176598    4750 fix.go:56] duration metric: took 21.319708ms for fixHost
	I0916 04:09:09.176615    4750 start.go:83] releasing machines lock for "kubernetes-upgrade-711000", held for 21.492458ms
	W0916 04:09:09.176830    4750 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-711000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-711000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 04:09:09.185478    4750 out.go:201] 
	W0916 04:09:09.188604    4750 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0916 04:09:09.188630    4750 out.go:270] * 
	* 
	W0916 04:09:09.191226    4750 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 04:09:09.199417    4750 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-711000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-711000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-711000 version --output=json: exit status 1 (60.528ms)

                                                
                                                
** stderr ** 
	error: context "kubernetes-upgrade-711000" does not exist

                                                
                                                
** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
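
The kubectl failure above is fallout rather than a separate bug: the v1.31.1 start never produced a running cluster, so no kubeconfig context named kubernetes-upgrade-711000 was ever written. A minimal standalone sketch (a hypothetical helper, not part of helpers_test.go) that cross-checks which contexts kubectl actually knows about:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// "-o name" prints one context name per line; had the upgrade
	// succeeded, "kubernetes-upgrade-711000" would appear in this list.
	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").CombinedOutput()
	if err != nil {
		fmt.Fprintf(os.Stderr, "kubectl: %v\n%s", err, out)
		os.Exit(1)
	}
	fmt.Printf("known contexts:\n%s", out)
}
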
panic.go:629: *** TestKubernetesUpgrade FAILED at 2024-09-16 04:09:09.273722 -0700 PDT m=+2965.300131959
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-711000 -n kubernetes-upgrade-711000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-711000 -n kubernetes-upgrade-711000: exit status 7 (32.873625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-711000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-711000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-711000
--- FAIL: TestKubernetesUpgrade (17.22s)
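
Every restart attempt in this test dies on the same line, Failed to connect to "/var/run/socket_vmnet": Connection refused: nothing on the host was accepting connections on the vmnet helper socket that the qemu2 driver's networking is configured to use, so minikube gives up with the GUEST_PROVISION error class (exit status 80). A minimal standalone probe, assuming only the socket path shown in the log, reproduces that check from the host:

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Socket path copied from the failure above; adjust if socket_vmnet
	// is installed elsewhere.
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// "connection refused" here matches the driver error: the path may
		// exist, but no socket_vmnet daemon is serving it.
		fmt.Fprintf(os.Stderr, "socket_vmnet unreachable at %s: %v\n", sock, err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections at", sock)
}
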

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (2.22s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=19651
- KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3064991766/001
* Using the hyperkit driver based on user configuration

                                                
                                                
X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

                                                
                                                
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (2.22s)

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.84s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=19651
- KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3379045887/001
* Using the hyperkit driver based on user configuration

                                                
                                                
X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

                                                
                                                
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.84s)
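
Both TestHyperkitDriverSkipUpgrade subtests above fail identically: the suite exercises the hyperkit driver on an arm64 Mac, and minikube refuses with DRV_UNSUPPORTED_OS (exit status 56), which the harness records as a failure rather than a skip. A hedged sketch, using hypothetical names rather than minikube's actual helpers, of the platform guard that would turn such runs into skips:

package upgrade_test

import (
	"runtime"
	"testing"
)

// skipIfHyperkitUnsupported is a hypothetical guard, not minikube's actual
// helper: hyperkit only ships for darwin/amd64, so skip everywhere else.
func skipIfHyperkitUnsupported(t *testing.T) {
	t.Helper()
	if runtime.GOOS != "darwin" || runtime.GOARCH != "amd64" {
		t.Skipf("hyperkit driver is unsupported on %s/%s", runtime.GOOS, runtime.GOARCH)
	}
}

func TestHyperkitGuardExample(t *testing.T) {
	skipIfHyperkitUnsupported(t)
	// driver install/upgrade assertions would run here on darwin/amd64
}
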

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (585.97s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.1132415007 start -p stopped-upgrade-716000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.1132415007 start -p stopped-upgrade-716000 --memory=2200 --vm-driver=qemu2 : (52.29135025s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.1132415007 -p stopped-upgrade-716000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.1132415007 -p stopped-upgrade-716000 stop: (12.0964945s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-716000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0916 04:11:30.712662    1652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/addons-490000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:12:57.169071    1652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/functional-926000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:13:27.615210    1652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/addons-490000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-716000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m41.469125666s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-716000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19651
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-716000" primary control-plane node in "stopped-upgrade-716000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-716000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0916 04:10:14.829774    4792 out.go:345] Setting OutFile to fd 1 ...
	I0916 04:10:14.829943    4792 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:10:14.829947    4792 out.go:358] Setting ErrFile to fd 2...
	I0916 04:10:14.829950    4792 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:10:14.830079    4792 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1133/.minikube/bin
	I0916 04:10:14.831203    4792 out.go:352] Setting JSON to false
	I0916 04:10:14.850133    4792 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4177,"bootTime":1726480837,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0916 04:10:14.850242    4792 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 04:10:14.854281    4792 out.go:177] * [stopped-upgrade-716000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0916 04:10:14.873461    4792 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 04:10:14.873474    4792 notify.go:220] Checking for updates...
	I0916 04:10:14.880317    4792 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	I0916 04:10:14.883295    4792 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0916 04:10:14.886328    4792 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 04:10:14.889330    4792 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	I0916 04:10:14.890371    4792 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 04:10:14.893643    4792 config.go:182] Loaded profile config "stopped-upgrade-716000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0916 04:10:14.897270    4792 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0916 04:10:14.900307    4792 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 04:10:14.904313    4792 out.go:177] * Using the qemu2 driver based on existing profile
	I0916 04:10:14.911292    4792 start.go:297] selected driver: qemu2
	I0916 04:10:14.911300    4792 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-716000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50516 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-716000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0916 04:10:14.911361    4792 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 04:10:14.914168    4792 cni.go:84] Creating CNI manager for ""
	I0916 04:10:14.914206    4792 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 04:10:14.914226    4792 start.go:340] cluster config:
	{Name:stopped-upgrade-716000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50516 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-716000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0916 04:10:14.914283    4792 iso.go:125] acquiring lock: {Name:mk55e9fb1297ea51932361bbd0234f8c1091a697 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 04:10:14.921319    4792 out.go:177] * Starting "stopped-upgrade-716000" primary control-plane node in "stopped-upgrade-716000" cluster
	I0916 04:10:14.925286    4792 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0916 04:10:14.925317    4792 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0916 04:10:14.925329    4792 cache.go:56] Caching tarball of preloaded images
	I0916 04:10:14.925416    4792 preload.go:172] Found /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0916 04:10:14.925422    4792 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0916 04:10:14.925483    4792 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/stopped-upgrade-716000/config.json ...
	I0916 04:10:14.925884    4792 start.go:360] acquireMachinesLock for stopped-upgrade-716000: {Name:mk751074fc8f4092f9eb4cf42ed6e63cca97a7fa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 04:10:14.925921    4792 start.go:364] duration metric: took 28.792µs to acquireMachinesLock for "stopped-upgrade-716000"
	I0916 04:10:14.925931    4792 start.go:96] Skipping create...Using existing machine configuration
	I0916 04:10:14.925936    4792 fix.go:54] fixHost starting: 
	I0916 04:10:14.926045    4792 fix.go:112] recreateIfNeeded on stopped-upgrade-716000: state=Stopped err=<nil>
	W0916 04:10:14.926054    4792 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 04:10:14.930360    4792 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-716000" ...
	I0916 04:10:14.938521    4792 qemu.go:418] Using hvf for hardware acceleration
	I0916 04:10:14.938636    4792 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/stopped-upgrade-716000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/stopped-upgrade-716000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/stopped-upgrade-716000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50481-:22,hostfwd=tcp::50482-:2376,hostname=stopped-upgrade-716000 -daemonize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/stopped-upgrade-716000/disk.qcow2
	I0916 04:10:14.986361    4792 main.go:141] libmachine: STDOUT: 
	I0916 04:10:14.986388    4792 main.go:141] libmachine: STDERR: 
	I0916 04:10:14.986396    4792 main.go:141] libmachine: Waiting for VM to start (ssh -p 50481 docker@127.0.0.1)...
	I0916 04:10:35.372200    4792 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/stopped-upgrade-716000/config.json ...
	I0916 04:10:35.372487    4792 machine.go:93] provisionDockerMachine start ...
	I0916 04:10:35.372543    4792 main.go:141] libmachine: Using SSH client type: native
	I0916 04:10:35.372698    4792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105111190] 0x1051139d0 <nil>  [] 0s} localhost 50481 <nil> <nil>}
	I0916 04:10:35.372704    4792 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 04:10:35.436053    4792 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0916 04:10:35.436067    4792 buildroot.go:166] provisioning hostname "stopped-upgrade-716000"
	I0916 04:10:35.436123    4792 main.go:141] libmachine: Using SSH client type: native
	I0916 04:10:35.436243    4792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105111190] 0x1051139d0 <nil>  [] 0s} localhost 50481 <nil> <nil>}
	I0916 04:10:35.436250    4792 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-716000 && echo "stopped-upgrade-716000" | sudo tee /etc/hostname
	I0916 04:10:35.504520    4792 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-716000
	
	I0916 04:10:35.504584    4792 main.go:141] libmachine: Using SSH client type: native
	I0916 04:10:35.504701    4792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105111190] 0x1051139d0 <nil>  [] 0s} localhost 50481 <nil> <nil>}
	I0916 04:10:35.504710    4792 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-716000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-716000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-716000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 04:10:35.570907    4792 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 04:10:35.570921    4792 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19651-1133/.minikube CaCertPath:/Users/jenkins/minikube-integration/19651-1133/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19651-1133/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19651-1133/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19651-1133/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19651-1133/.minikube}
	I0916 04:10:35.570930    4792 buildroot.go:174] setting up certificates
	I0916 04:10:35.570942    4792 provision.go:84] configureAuth start
	I0916 04:10:35.570948    4792 provision.go:143] copyHostCerts
	I0916 04:10:35.571033    4792 exec_runner.go:144] found /Users/jenkins/minikube-integration/19651-1133/.minikube/ca.pem, removing ...
	I0916 04:10:35.571056    4792 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19651-1133/.minikube/ca.pem
	I0916 04:10:35.571171    4792 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19651-1133/.minikube/ca.pem (1078 bytes)
	I0916 04:10:35.571387    4792 exec_runner.go:144] found /Users/jenkins/minikube-integration/19651-1133/.minikube/cert.pem, removing ...
	I0916 04:10:35.571391    4792 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19651-1133/.minikube/cert.pem
	I0916 04:10:35.571448    4792 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19651-1133/.minikube/cert.pem (1123 bytes)
	I0916 04:10:35.571577    4792 exec_runner.go:144] found /Users/jenkins/minikube-integration/19651-1133/.minikube/key.pem, removing ...
	I0916 04:10:35.571581    4792 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19651-1133/.minikube/key.pem
	I0916 04:10:35.571634    4792 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19651-1133/.minikube/key.pem (1675 bytes)
	I0916 04:10:35.571742    4792 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19651-1133/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19651-1133/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-716000 san=[127.0.0.1 localhost minikube stopped-upgrade-716000]
	I0916 04:10:35.612126    4792 provision.go:177] copyRemoteCerts
	I0916 04:10:35.612170    4792 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 04:10:35.612179    4792 sshutil.go:53] new ssh client: &{IP:localhost Port:50481 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/stopped-upgrade-716000/id_rsa Username:docker}
	I0916 04:10:35.645715    4792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 04:10:35.652491    4792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0916 04:10:35.658974    4792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 04:10:35.666383    4792 provision.go:87] duration metric: took 95.431667ms to configureAuth
	I0916 04:10:35.666393    4792 buildroot.go:189] setting minikube options for container-runtime
	I0916 04:10:35.666512    4792 config.go:182] Loaded profile config "stopped-upgrade-716000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0916 04:10:35.666553    4792 main.go:141] libmachine: Using SSH client type: native
	I0916 04:10:35.666633    4792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105111190] 0x1051139d0 <nil>  [] 0s} localhost 50481 <nil> <nil>}
	I0916 04:10:35.666638    4792 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0916 04:10:35.728238    4792 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0916 04:10:35.728250    4792 buildroot.go:70] root file system type: tmpfs
	I0916 04:10:35.728310    4792 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0916 04:10:35.728372    4792 main.go:141] libmachine: Using SSH client type: native
	I0916 04:10:35.728490    4792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105111190] 0x1051139d0 <nil>  [] 0s} localhost 50481 <nil> <nil>}
	I0916 04:10:35.728524    4792 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0916 04:10:35.791394    4792 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0916 04:10:35.791448    4792 main.go:141] libmachine: Using SSH client type: native
	I0916 04:10:35.791554    4792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105111190] 0x1051139d0 <nil>  [] 0s} localhost 50481 <nil> <nil>}
	I0916 04:10:35.791562    4792 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0916 04:10:36.152464    4792 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0916 04:10:36.152477    4792 machine.go:96] duration metric: took 779.999542ms to provisionDockerMachine
	I0916 04:10:36.152488    4792 start.go:293] postStartSetup for "stopped-upgrade-716000" (driver="qemu2")
	I0916 04:10:36.152495    4792 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 04:10:36.152570    4792 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 04:10:36.152580    4792 sshutil.go:53] new ssh client: &{IP:localhost Port:50481 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/stopped-upgrade-716000/id_rsa Username:docker}
	I0916 04:10:36.185692    4792 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 04:10:36.187082    4792 info.go:137] Remote host: Buildroot 2021.02.12
	I0916 04:10:36.187090    4792 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19651-1133/.minikube/addons for local assets ...
	I0916 04:10:36.187169    4792 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19651-1133/.minikube/files for local assets ...
	I0916 04:10:36.187293    4792 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19651-1133/.minikube/files/etc/ssl/certs/16522.pem -> 16522.pem in /etc/ssl/certs
	I0916 04:10:36.187422    4792 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 04:10:36.190535    4792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1133/.minikube/files/etc/ssl/certs/16522.pem --> /etc/ssl/certs/16522.pem (1708 bytes)
	I0916 04:10:36.197758    4792 start.go:296] duration metric: took 45.264792ms for postStartSetup
	I0916 04:10:36.197771    4792 fix.go:56] duration metric: took 21.272257375s for fixHost
	I0916 04:10:36.197815    4792 main.go:141] libmachine: Using SSH client type: native
	I0916 04:10:36.197914    4792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105111190] 0x1051139d0 <nil>  [] 0s} localhost 50481 <nil> <nil>}
	I0916 04:10:36.197919    4792 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0916 04:10:36.259771    4792 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726485036.505115171
	
	I0916 04:10:36.259781    4792 fix.go:216] guest clock: 1726485036.505115171
	I0916 04:10:36.259785    4792 fix.go:229] Guest: 2024-09-16 04:10:36.505115171 -0700 PDT Remote: 2024-09-16 04:10:36.197773 -0700 PDT m=+21.390020167 (delta=307.342171ms)
	I0916 04:10:36.259800    4792 fix.go:200] guest clock delta is within tolerance: 307.342171ms
	I0916 04:10:36.259802    4792 start.go:83] releasing machines lock for "stopped-upgrade-716000", held for 21.334299s
	I0916 04:10:36.259874    4792 ssh_runner.go:195] Run: cat /version.json
	I0916 04:10:36.259887    4792 sshutil.go:53] new ssh client: &{IP:localhost Port:50481 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/stopped-upgrade-716000/id_rsa Username:docker}
	I0916 04:10:36.259874    4792 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 04:10:36.259947    4792 sshutil.go:53] new ssh client: &{IP:localhost Port:50481 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/stopped-upgrade-716000/id_rsa Username:docker}
	W0916 04:10:36.260449    4792 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50481: connect: connection refused
	I0916 04:10:36.260468    4792 retry.go:31] will retry after 350.641498ms: dial tcp [::1]:50481: connect: connection refused
	W0916 04:10:36.657816    4792 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0916 04:10:36.657947    4792 ssh_runner.go:195] Run: systemctl --version
	I0916 04:10:36.661457    4792 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0916 04:10:36.664276    4792 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 04:10:36.664333    4792 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0916 04:10:36.668809    4792 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0916 04:10:36.675230    4792 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0916 04:10:36.675244    4792 start.go:495] detecting cgroup driver to use...
	I0916 04:10:36.675349    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 04:10:36.683943    4792 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0916 04:10:36.687805    4792 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 04:10:36.691396    4792 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0916 04:10:36.691443    4792 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0916 04:10:36.695009    4792 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 04:10:36.698327    4792 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 04:10:36.701430    4792 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 04:10:36.704213    4792 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 04:10:36.706995    4792 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 04:10:36.710192    4792 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 04:10:36.713408    4792 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 04:10:36.716271    4792 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 04:10:36.718990    4792 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 04:10:36.722023    4792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 04:10:36.798999    4792 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0916 04:10:36.809636    4792 start.go:495] detecting cgroup driver to use...
	I0916 04:10:36.809715    4792 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0916 04:10:36.815918    4792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 04:10:36.824467    4792 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 04:10:36.832848    4792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 04:10:36.837356    4792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 04:10:36.841821    4792 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0916 04:10:36.898919    4792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 04:10:36.904247    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 04:10:36.909578    4792 ssh_runner.go:195] Run: which cri-dockerd
	I0916 04:10:36.910765    4792 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0916 04:10:36.913585    4792 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0916 04:10:36.918492    4792 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0916 04:10:37.000460    4792 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0916 04:10:37.083462    4792 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0916 04:10:37.083525    4792 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0916 04:10:37.088754    4792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 04:10:37.159614    4792 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0916 04:10:38.324039    4792 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.164430083s)
	I0916 04:10:38.324123    4792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0916 04:10:38.331064    4792 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0916 04:10:38.337083    4792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0916 04:10:38.341564    4792 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0916 04:10:38.410705    4792 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0916 04:10:38.510735    4792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 04:10:38.597988    4792 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0916 04:10:38.604209    4792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0916 04:10:38.609398    4792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 04:10:38.697740    4792 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0916 04:10:38.741105    4792 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0916 04:10:38.741206    4792 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0916 04:10:38.743896    4792 start.go:563] Will wait 60s for crictl version
	I0916 04:10:38.743960    4792 ssh_runner.go:195] Run: which crictl
	I0916 04:10:38.745641    4792 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 04:10:38.760469    4792 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0916 04:10:38.760554    4792 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0916 04:10:38.776729    4792 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0916 04:10:38.794577    4792 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0916 04:10:38.794656    4792 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0916 04:10:38.795978    4792 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 04:10:38.799575    4792 kubeadm.go:883] updating cluster {Name:stopped-upgrade-716000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50516 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-716000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
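The one-line blob above is a Go struct dump: minikube logs its cluster config with fmt's value verbs, so every field prints as Name:value with no quoting or line breaks. A minimal sketch of the mechanism, using a hypothetical miniature struct rather than minikube's real (much larger) ClusterConfig type:

package main

import "fmt"

// Hypothetical miniature of a cluster config; the field set is
// illustrative only, not minikube's actual type.
type clusterConfig struct {
	Name   string
	Driver string
	Memory int
	CPUs   int
}

func main() {
	cc := clusterConfig{Name: "stopped-upgrade-716000", Driver: "qemu2", Memory: 2200, CPUs: 2}
	// %+v renders "{Name:... Driver:... Memory:... CPUs:...}" on one line,
	// which matches the shape of the dump in the log above.
	fmt.Printf("updating cluster %+v ...\n", cc)
}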
	I0916 04:10:38.799629    4792 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0916 04:10:38.799680    4792 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0916 04:10:38.810562    4792 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0916 04:10:38.810574    4792 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0916 04:10:38.810630    4792 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0916 04:10:38.814256    4792 ssh_runner.go:195] Run: which lz4
	I0916 04:10:38.815620    4792 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0916 04:10:38.817025    4792 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0916 04:10:38.817040    4792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0916 04:10:39.778895    4792 docker.go:649] duration metric: took 963.337166ms to copy over tarball
	I0916 04:10:39.778963    4792 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0916 04:10:41.087279    4792 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.308327667s)
	I0916 04:10:41.087293    4792 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0916 04:10:41.103841    4792 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0916 04:10:41.106960    4792 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0916 04:10:41.112105    4792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 04:10:41.197571    4792 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0916 04:10:42.466763    4792 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.269199583s)
	I0916 04:10:42.466889    4792 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0916 04:10:42.479495    4792 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0916 04:10:42.479505    4792 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0916 04:10:42.479509    4792 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0916 04:10:42.483843    4792 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 04:10:42.485098    4792 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0916 04:10:42.487292    4792 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 04:10:42.487481    4792 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0916 04:10:42.489627    4792 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0916 04:10:42.489680    4792 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0916 04:10:42.490943    4792 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0916 04:10:42.491251    4792 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0916 04:10:42.491747    4792 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0916 04:10:42.492726    4792 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0916 04:10:42.493109    4792 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0916 04:10:42.494079    4792 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0916 04:10:42.494179    4792 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0916 04:10:42.494203    4792 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0916 04:10:42.496104    4792 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0916 04:10:42.496104    4792 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0916 04:10:42.870579    4792 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0916 04:10:42.881581    4792 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0916 04:10:42.881608    4792 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0916 04:10:42.881674    4792 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0916 04:10:42.891642    4792 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0916 04:10:42.905347    4792 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0916 04:10:42.915362    4792 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0916 04:10:42.915381    4792 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0916 04:10:42.915451    4792 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0916 04:10:42.925845    4792 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0916 04:10:42.933668    4792 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0916 04:10:42.943469    4792 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0916 04:10:42.943489    4792 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0916 04:10:42.943552    4792 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0916 04:10:42.954051    4792 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0916 04:10:42.965669    4792 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0916 04:10:42.968683    4792 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0916 04:10:42.985507    4792 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0916 04:10:42.985529    4792 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0916 04:10:42.985596    4792 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0916 04:10:42.985628    4792 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0916 04:10:42.985638    4792 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0916 04:10:42.985671    4792 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0916 04:10:43.000218    4792 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0916 04:10:43.000294    4792 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	W0916 04:10:43.001260    4792 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0916 04:10:43.001320    4792 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0916 04:10:43.001369    4792 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0916 04:10:43.014793    4792 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0916 04:10:43.014816    4792 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0916 04:10:43.014886    4792 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0916 04:10:43.019935    4792 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0916 04:10:43.019955    4792 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0916 04:10:43.020017    4792 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0916 04:10:43.028253    4792 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0916 04:10:43.028394    4792 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0916 04:10:43.032743    4792 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0916 04:10:43.032855    4792 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0916 04:10:43.033838    4792 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0916 04:10:43.033850    4792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0916 04:10:43.034161    4792 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0916 04:10:43.034170    4792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0916 04:10:43.041978    4792 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0916 04:10:43.041999    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0916 04:10:43.097711    4792 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0916 04:10:43.097737    4792 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0916 04:10:43.097753    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0916 04:10:43.142575    4792 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	W0916 04:10:43.320995    4792 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0916 04:10:43.321160    4792 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 04:10:43.334381    4792 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0916 04:10:43.334407    4792 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 04:10:43.334488    4792 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 04:10:43.348862    4792 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0916 04:10:43.349008    4792 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0916 04:10:43.350401    4792 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0916 04:10:43.350413    4792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0916 04:10:43.380219    4792 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0916 04:10:43.380234    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0916 04:10:43.630642    4792 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0916 04:10:43.630675    4792 cache_images.go:92] duration metric: took 1.151181875s to LoadCachedImages
	W0916 04:10:43.630710    4792 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
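Each "Loading image" step above streams a cached tarball into the Docker daemon via "sudo cat <file> | docker load"; the final warning appears only because the kube-apiserver tarball was missing from the host cache, not because a load failed. A rough local sketch of that pipeline, assuming a docker CLI on PATH; minikube actually runs the command over SSH inside the guest (ssh_runner.go):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// loadImage pipes a saved image tarball into "docker load", mirroring the
// shell pipeline in the log. The path is one of the files shown above.
func loadImage(path string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	cmd := exec.Command("docker", "load")
	cmd.Stdin = f // stream the tarball on stdin, as "cat ... | docker load" does
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("docker load: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := loadImage("/var/lib/minikube/images/pause_3.7"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}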
	I0916 04:10:43.630717    4792 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0916 04:10:43.630773    4792 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-716000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-716000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 04:10:43.630844    4792 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0916 04:10:43.644035    4792 cni.go:84] Creating CNI manager for ""
	I0916 04:10:43.644056    4792 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 04:10:43.644062    4792 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 04:10:43.644072    4792 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-716000 NodeName:stopped-upgrade-716000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 04:10:43.644159    4792 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-716000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
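The generated config above is one file holding four YAML documents separated by "---": InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A small sketch that splits such a file into its documents; real tooling would decode each one with a YAML library rather than string handling:

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Abbreviated stand-in for /var/tmp/minikube/kubeadm.yaml above.
	const cfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
`
	// Split on the document separator and report each document's kind.
	for i, doc := range strings.Split(cfg, "---\n") {
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(line, "kind: ") {
				fmt.Printf("document %d: %s\n", i+1, strings.TrimPrefix(line, "kind: "))
			}
		}
	}
}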
	I0916 04:10:43.644240    4792 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0916 04:10:43.647054    4792 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 04:10:43.647088    4792 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 04:10:43.650147    4792 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0916 04:10:43.655288    4792 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 04:10:43.660361    4792 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0916 04:10:43.665472    4792 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0916 04:10:43.666892    4792 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 04:10:43.670798    4792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 04:10:43.757491    4792 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 04:10:43.764316    4792 certs.go:68] Setting up /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/stopped-upgrade-716000 for IP: 10.0.2.15
	I0916 04:10:43.764325    4792 certs.go:194] generating shared ca certs ...
	I0916 04:10:43.764335    4792 certs.go:226] acquiring lock for ca certs: {Name:mk7bbdd60870074cef3b6b7f58dae6ae1dc0ffa1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 04:10:43.764516    4792 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19651-1133/.minikube/ca.key
	I0916 04:10:43.764568    4792 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19651-1133/.minikube/proxy-client-ca.key
	I0916 04:10:43.764575    4792 certs.go:256] generating profile certs ...
	I0916 04:10:43.764651    4792 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/stopped-upgrade-716000/client.key
	I0916 04:10:43.764670    4792 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/stopped-upgrade-716000/apiserver.key.ff75fb31
	I0916 04:10:43.764678    4792 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/stopped-upgrade-716000/apiserver.crt.ff75fb31 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0916 04:10:43.853550    4792 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/stopped-upgrade-716000/apiserver.crt.ff75fb31 ...
	I0916 04:10:43.853562    4792 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/stopped-upgrade-716000/apiserver.crt.ff75fb31: {Name:mke3c93083ff8ba32761762450527a69939c89bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 04:10:43.854113    4792 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/stopped-upgrade-716000/apiserver.key.ff75fb31 ...
	I0916 04:10:43.854120    4792 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/stopped-upgrade-716000/apiserver.key.ff75fb31: {Name:mkd50dd7bba0e5318d7c3f16600658e8553bb63f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 04:10:43.854277    4792 certs.go:381] copying /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/stopped-upgrade-716000/apiserver.crt.ff75fb31 -> /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/stopped-upgrade-716000/apiserver.crt
	I0916 04:10:43.854402    4792 certs.go:385] copying /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/stopped-upgrade-716000/apiserver.key.ff75fb31 -> /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/stopped-upgrade-716000/apiserver.key
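The apiserver profile cert generated above carries the four IP SANs listed in the log: the in-cluster service VIP (10.96.0.1), loopback, 10.0.0.1, and the node IP (10.0.2.15). A hedged sketch of producing such a certificate with Go's x509 package; it self-signs for brevity, whereas minikube signs with its minikubeCA key:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The four IP SANs from the "Generating cert ... with IP's" line.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("10.0.2.15"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}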
	I0916 04:10:43.854557    4792 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/stopped-upgrade-716000/proxy-client.key
	I0916 04:10:43.854705    4792 certs.go:484] found cert: /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/1652.pem (1338 bytes)
	W0916 04:10:43.854736    4792 certs.go:480] ignoring /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/1652_empty.pem, impossibly tiny 0 bytes
	I0916 04:10:43.854742    4792 certs.go:484] found cert: /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 04:10:43.854768    4792 certs.go:484] found cert: /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/ca.pem (1078 bytes)
	I0916 04:10:43.854786    4792 certs.go:484] found cert: /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/cert.pem (1123 bytes)
	I0916 04:10:43.854804    4792 certs.go:484] found cert: /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/key.pem (1675 bytes)
	I0916 04:10:43.854869    4792 certs.go:484] found cert: /Users/jenkins/minikube-integration/19651-1133/.minikube/files/etc/ssl/certs/16522.pem (1708 bytes)
	I0916 04:10:43.855273    4792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1133/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 04:10:43.863014    4792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1133/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0916 04:10:43.873402    4792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1133/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 04:10:43.881474    4792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1133/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 04:10:43.889061    4792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/stopped-upgrade-716000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0916 04:10:43.895285    4792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/stopped-upgrade-716000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 04:10:43.902152    4792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/stopped-upgrade-716000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 04:10:43.909558    4792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/stopped-upgrade-716000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 04:10:43.916479    4792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1133/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 04:10:43.923077    4792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/1652.pem --> /usr/share/ca-certificates/1652.pem (1338 bytes)
	I0916 04:10:43.930380    4792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1133/.minikube/files/etc/ssl/certs/16522.pem --> /usr/share/ca-certificates/16522.pem (1708 bytes)
	I0916 04:10:43.937499    4792 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 04:10:43.942353    4792 ssh_runner.go:195] Run: openssl version
	I0916 04:10:43.944222    4792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 04:10:43.947025    4792 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 04:10:43.948459    4792 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:20 /usr/share/ca-certificates/minikubeCA.pem
	I0916 04:10:43.948486    4792 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 04:10:43.950075    4792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 04:10:43.953010    4792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1652.pem && ln -fs /usr/share/ca-certificates/1652.pem /etc/ssl/certs/1652.pem"
	I0916 04:10:43.955818    4792 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1652.pem
	I0916 04:10:43.957112    4792 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:35 /usr/share/ca-certificates/1652.pem
	I0916 04:10:43.957135    4792 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1652.pem
	I0916 04:10:43.959484    4792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1652.pem /etc/ssl/certs/51391683.0"
	I0916 04:10:43.962589    4792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16522.pem && ln -fs /usr/share/ca-certificates/16522.pem /etc/ssl/certs/16522.pem"
	I0916 04:10:43.966007    4792 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16522.pem
	I0916 04:10:43.967488    4792 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:35 /usr/share/ca-certificates/16522.pem
	I0916 04:10:43.967509    4792 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16522.pem
	I0916 04:10:43.969399    4792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16522.pem /etc/ssl/certs/3ec20f2e.0"
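The symlink steps above follow OpenSSL's CA-directory convention: certificates in /etc/ssl/certs are looked up by subject hash, so each installed PEM gets a "<hash>.0" link. A sketch of that dance; the function name is illustrative, the paths are the ones from the log, and it needs root to write the link:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// installCA mirrors the log's two commands: compute the subject hash with
// "openssl x509 -hash -noout", then "test -L <link> || ln -fs <pem> <link>".
func installCA(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	if _, err := os.Lstat(link); err == nil {
		return nil // link already present
	}
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}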
	I0916 04:10:43.972210    4792 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 04:10:43.973529    4792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0916 04:10:43.975722    4792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0916 04:10:43.977480    4792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0916 04:10:43.979254    4792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0916 04:10:43.980970    4792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0916 04:10:43.982628    4792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0916 04:10:43.984501    4792 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-716000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50516 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-716000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0916 04:10:43.984579    4792 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0916 04:10:43.995167    4792 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 04:10:43.998332    4792 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0916 04:10:43.998344    4792 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0916 04:10:43.998371    4792 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0916 04:10:44.001736    4792 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0916 04:10:44.002043    4792 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-716000" does not appear in /Users/jenkins/minikube-integration/19651-1133/kubeconfig
	I0916 04:10:44.002156    4792 kubeconfig.go:62] /Users/jenkins/minikube-integration/19651-1133/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-716000" cluster setting kubeconfig missing "stopped-upgrade-716000" context setting]
	I0916 04:10:44.002387    4792 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19651-1133/kubeconfig: {Name:mk6c71bad5b7ea3c1178ed072ac13c9e3d1e147d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 04:10:44.002819    4792 kapi.go:59] client config for stopped-upgrade-716000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/stopped-upgrade-716000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/stopped-upgrade-716000/client.key", CAFile:"/Users/jenkins/minikube-integration/19651-1133/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1066e9800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 04:10:44.003153    4792 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0916 04:10:44.005880    4792 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-716000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
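The reconfigure decision above rides on diff's exit status: "diff -u" returns 0 when the staged kubeadm.yaml.new matches the file on disk and 1 when they differ, emitting the unified diff on stdout. A sketch of such a drift check; the function name is illustrative:

package main

import (
	"fmt"
	"os/exec"
)

// configDrifted runs "diff -u old new" and interprets the exit status:
// 0 = identical, 1 = files differ (drift), anything else = real error.
func configDrifted(oldPath, newPath string) (bool, string, error) {
	out, err := exec.Command("diff", "-u", oldPath, newPath).Output()
	if err == nil {
		return false, "", nil
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		return true, string(out), nil // stdout still holds the unified diff
	}
	return false, "", err
}

func main() {
	drifted, diff, err := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	fmt.Println("drifted:", drifted, "err:", err)
	fmt.Print(diff)
}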
	I0916 04:10:44.005888    4792 kubeadm.go:1160] stopping kube-system containers ...
	I0916 04:10:44.005935    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0916 04:10:44.016566    4792 docker.go:483] Stopping containers: [97104cca0786 40ab2f675d22 03acb758d55b 1973f852f436 dbc4a78c163a a460b4b3d0a7 609e5463648c 7353da114ab2 fe7b62f74c09]
	I0916 04:10:44.016648    4792 ssh_runner.go:195] Run: docker stop 97104cca0786 40ab2f675d22 03acb758d55b 1973f852f436 dbc4a78c163a a460b4b3d0a7 609e5463648c 7353da114ab2 fe7b62f74c09
	I0916 04:10:44.027209    4792 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0916 04:10:44.033431    4792 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 04:10:44.036194    4792 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 04:10:44.036199    4792 kubeadm.go:157] found existing configuration files:
	
	I0916 04:10:44.036227    4792 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50516 /etc/kubernetes/admin.conf
	I0916 04:10:44.038709    4792 kubeadm.go:163] "https://control-plane.minikube.internal:50516" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50516 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 04:10:44.038745    4792 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 04:10:44.041646    4792 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50516 /etc/kubernetes/kubelet.conf
	I0916 04:10:44.044089    4792 kubeadm.go:163] "https://control-plane.minikube.internal:50516" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50516 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 04:10:44.044112    4792 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 04:10:44.046735    4792 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50516 /etc/kubernetes/controller-manager.conf
	I0916 04:10:44.049764    4792 kubeadm.go:163] "https://control-plane.minikube.internal:50516" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50516 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 04:10:44.049790    4792 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 04:10:44.052288    4792 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50516 /etc/kubernetes/scheduler.conf
	I0916 04:10:44.054773    4792 kubeadm.go:163] "https://control-plane.minikube.internal:50516" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50516 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 04:10:44.054799    4792 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 04:10:44.057705    4792 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 04:10:44.060161    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0916 04:10:44.082455    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0916 04:10:44.543818    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0916 04:10:44.679302    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0916 04:10:44.702248    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0916 04:10:44.723416    4792 api_server.go:52] waiting for apiserver process to appear ...
	I0916 04:10:44.723497    4792 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 04:10:45.225626    4792 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 04:10:45.725541    4792 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 04:10:45.729808    4792 api_server.go:72] duration metric: took 1.006414458s to wait for apiserver process to appear ...
	I0916 04:10:45.729824    4792 api_server.go:88] waiting for apiserver healthz status ...
	I0916 04:10:45.729833    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:10:50.731987    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:10:50.732108    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:10:55.732879    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:10:55.732903    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:11:00.733337    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:11:00.733380    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:11:05.734105    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:11:05.734130    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:11:10.734889    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:11:10.734925    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:11:15.735965    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:11:15.736015    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:11:20.737495    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:11:20.737538    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:11:25.739286    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:11:25.739314    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:11:30.741470    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:11:30.741522    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:11:35.743775    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:11:35.743824    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:11:40.745931    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:11:40.745974    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:11:45.748092    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
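Every probe above fails the same way: a roughly five-second client timeout expires before https://10.0.2.15:8443/healthz returns any headers, meaning the apiserver inside the VM never answers, and the pattern repeats until minikube falls back to gathering logs below. A minimal sketch of such a probe, assuming the endpoint from the log and skipping TLS verification for brevity where the real check would trust minikube's CA:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the ~5s cadence of the failures above
		Transport: &http.Transport{
			// Assumption for the sketch: skip verification instead of
			// loading the cluster CA bundle.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for i := 0; i < 3; i++ {
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err != nil {
			// An unresponsive apiserver surfaces exactly as in the log:
			// "Client.Timeout exceeded while awaiting headers".
			fmt.Println("stopped:", err)
			continue
		}
		resp.Body.Close()
		fmt.Println("healthz:", resp.Status)
	}
}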
	I0916 04:11:45.748261    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:11:45.761000    4792 logs.go:276] 2 containers: [99668d812a17 40ab2f675d22]
	I0916 04:11:45.761088    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:11:45.771684    4792 logs.go:276] 2 containers: [751f46d9a26d 1973f852f436]
	I0916 04:11:45.771770    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:11:45.782494    4792 logs.go:276] 1 containers: [ed5c57a396a3]
	I0916 04:11:45.782572    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:11:45.793003    4792 logs.go:276] 2 containers: [41d020d3dedc 609e5463648c]
	I0916 04:11:45.793084    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:11:45.810395    4792 logs.go:276] 1 containers: [0bd7f5ba91b9]
	I0916 04:11:45.810472    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:11:45.821657    4792 logs.go:276] 2 containers: [7a56c27a016a 97104cca0786]
	I0916 04:11:45.821744    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:11:45.831968    4792 logs.go:276] 0 containers: []
	W0916 04:11:45.831980    4792 logs.go:278] No container was found matching "kindnet"
	I0916 04:11:45.832052    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:11:45.848798    4792 logs.go:276] 2 containers: [03045b2f9560 ed1b9ebe2601]
	I0916 04:11:45.848817    4792 logs.go:123] Gathering logs for kubelet ...
	I0916 04:11:45.848824    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:11:45.889223    4792 logs.go:123] Gathering logs for etcd [1973f852f436] ...
	I0916 04:11:45.889237    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1973f852f436"
	I0916 04:11:45.904041    4792 logs.go:123] Gathering logs for coredns [ed5c57a396a3] ...
	I0916 04:11:45.904054    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5c57a396a3"
	I0916 04:11:45.915681    4792 logs.go:123] Gathering logs for storage-provisioner [ed1b9ebe2601] ...
	I0916 04:11:45.915693    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed1b9ebe2601"
	I0916 04:11:45.926809    4792 logs.go:123] Gathering logs for container status ...
	I0916 04:11:45.926822    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:11:45.939647    4792 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:11:45.939658    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:11:46.034820    4792 logs.go:123] Gathering logs for storage-provisioner [03045b2f9560] ...
	I0916 04:11:46.034835    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03045b2f9560"
	I0916 04:11:46.046326    4792 logs.go:123] Gathering logs for Docker ...
	I0916 04:11:46.046357    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:11:46.070178    4792 logs.go:123] Gathering logs for kube-apiserver [99668d812a17] ...
	I0916 04:11:46.070185    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99668d812a17"
	I0916 04:11:46.084035    4792 logs.go:123] Gathering logs for etcd [751f46d9a26d] ...
	I0916 04:11:46.084044    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 751f46d9a26d"
	I0916 04:11:46.098207    4792 logs.go:123] Gathering logs for kube-scheduler [609e5463648c] ...
	I0916 04:11:46.098215    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 609e5463648c"
	I0916 04:11:46.120852    4792 logs.go:123] Gathering logs for kube-proxy [0bd7f5ba91b9] ...
	I0916 04:11:46.120864    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd7f5ba91b9"
	I0916 04:11:46.133052    4792 logs.go:123] Gathering logs for kube-controller-manager [97104cca0786] ...
	I0916 04:11:46.133065    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97104cca0786"
	I0916 04:11:46.145968    4792 logs.go:123] Gathering logs for dmesg ...
	I0916 04:11:46.145978    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:11:46.150850    4792 logs.go:123] Gathering logs for kube-apiserver [40ab2f675d22] ...
	I0916 04:11:46.150857    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40ab2f675d22"
	I0916 04:11:46.194477    4792 logs.go:123] Gathering logs for kube-scheduler [41d020d3dedc] ...
	I0916 04:11:46.194489    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41d020d3dedc"
	I0916 04:11:46.206044    4792 logs.go:123] Gathering logs for kube-controller-manager [7a56c27a016a] ...
	I0916 04:11:46.206055    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a56c27a016a"
	I0916 04:11:48.724436    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:11:53.726729    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:11:53.726900    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:11:53.748129    4792 logs.go:276] 2 containers: [99668d812a17 40ab2f675d22]
	I0916 04:11:53.748231    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:11:53.761558    4792 logs.go:276] 2 containers: [751f46d9a26d 1973f852f436]
	I0916 04:11:53.761649    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:11:53.771691    4792 logs.go:276] 1 containers: [ed5c57a396a3]
	I0916 04:11:53.771784    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:11:53.782195    4792 logs.go:276] 2 containers: [41d020d3dedc 609e5463648c]
	I0916 04:11:53.782292    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:11:53.792953    4792 logs.go:276] 1 containers: [0bd7f5ba91b9]
	I0916 04:11:53.793039    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:11:53.807829    4792 logs.go:276] 2 containers: [7a56c27a016a 97104cca0786]
	I0916 04:11:53.807909    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:11:53.819341    4792 logs.go:276] 0 containers: []
	W0916 04:11:53.819353    4792 logs.go:278] No container was found matching "kindnet"
	I0916 04:11:53.819423    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:11:53.830063    4792 logs.go:276] 2 containers: [03045b2f9560 ed1b9ebe2601]
	I0916 04:11:53.830081    4792 logs.go:123] Gathering logs for storage-provisioner [03045b2f9560] ...
	I0916 04:11:53.830087    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03045b2f9560"
	I0916 04:11:53.841518    4792 logs.go:123] Gathering logs for kube-apiserver [99668d812a17] ...
	I0916 04:11:53.841528    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99668d812a17"
	I0916 04:11:53.855237    4792 logs.go:123] Gathering logs for coredns [ed5c57a396a3] ...
	I0916 04:11:53.855246    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5c57a396a3"
	I0916 04:11:53.877790    4792 logs.go:123] Gathering logs for kube-scheduler [609e5463648c] ...
	I0916 04:11:53.877802    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 609e5463648c"
	I0916 04:11:53.898243    4792 logs.go:123] Gathering logs for Docker ...
	I0916 04:11:53.898253    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:11:53.924521    4792 logs.go:123] Gathering logs for kubelet ...
	I0916 04:11:53.924533    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:11:53.960889    4792 logs.go:123] Gathering logs for kube-controller-manager [7a56c27a016a] ...
	I0916 04:11:53.960901    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a56c27a016a"
	I0916 04:11:53.979596    4792 logs.go:123] Gathering logs for storage-provisioner [ed1b9ebe2601] ...
	I0916 04:11:53.979606    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed1b9ebe2601"
	I0916 04:11:53.991918    4792 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:11:53.991928    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
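	Cluster-level state comes from the kubectl binary minikube ships inside the guest, pinned to the cluster's Kubernetes version (v1.24.1 here) and pointed at the node-local kubeconfig, so the dump works even when no client is configured on the host:

	    # Version-matched kubectl bundled in the guest VM.
	    sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes \
	        --kubeconfig=/var/lib/minikube/kubeconfig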
	I0916 04:11:54.030601    4792 logs.go:123] Gathering logs for kube-scheduler [41d020d3dedc] ...
	I0916 04:11:54.030613    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41d020d3dedc"
	I0916 04:11:54.043459    4792 logs.go:123] Gathering logs for kube-proxy [0bd7f5ba91b9] ...
	I0916 04:11:54.043469    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd7f5ba91b9"
	I0916 04:11:54.055683    4792 logs.go:123] Gathering logs for etcd [1973f852f436] ...
	I0916 04:11:54.055693    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1973f852f436"
	I0916 04:11:54.072129    4792 logs.go:123] Gathering logs for kube-controller-manager [97104cca0786] ...
	I0916 04:11:54.072140    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97104cca0786"
	I0916 04:11:54.084876    4792 logs.go:123] Gathering logs for container status ...
	I0916 04:11:54.084890    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
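	The container-status step uses a small fallback chain: run crictl if it resolves on PATH, otherwise fall back to plain docker ps, which keeps the same collection command working across container runtimes:

	    # Prefer crictl when installed; otherwise fall back to docker.
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a

	When crictl is absent, the backquoted `which crictl || echo crictl` expands to the literal word crictl, that command fails, and the || branch runs sudo docker ps -a instead.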
	I0916 04:11:54.096871    4792 logs.go:123] Gathering logs for dmesg ...
	I0916 04:11:54.096885    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
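	Kernel messages are trimmed to warnings and above, with the pager disabled (-P), human-readable timestamps (-H), and color forced off so the output survives the SSH hop cleanly:

	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400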
	I0916 04:11:54.101481    4792 logs.go:123] Gathering logs for kube-apiserver [40ab2f675d22] ...
	I0916 04:11:54.101490    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40ab2f675d22"
	I0916 04:11:54.139849    4792 logs.go:123] Gathering logs for etcd [751f46d9a26d] ...
	I0916 04:11:54.139860    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 751f46d9a26d"
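	Every per-component collection in the sweep reduces to the same bounded call: docker logs --tail 400 against an ID from the discovery step, which keeps a dump of a crash-looping cluster to a predictable size. For example, pulling both kube-apiserver and both etcd instances found above:

	    # IDs taken from the discovery output earlier in this sweep.
	    for id in 99668d812a17 40ab2f675d22 751f46d9a26d 1973f852f436; do
	      echo "=== $id ==="
	      docker logs --tail 400 "$id"
	    done

	From here the section repeats: probe /healthz, time out, rediscover the same containers, re-pull the same logs, in roughly eight-second cycles for as long as the test keeps waiting for the apiserver.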
	I0916 04:11:56.655930    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:12:01.658292    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:12:01.658513    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:12:01.678730    4792 logs.go:276] 2 containers: [99668d812a17 40ab2f675d22]
	I0916 04:12:01.678843    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:12:01.693133    4792 logs.go:276] 2 containers: [751f46d9a26d 1973f852f436]
	I0916 04:12:01.693219    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:12:01.709430    4792 logs.go:276] 1 containers: [ed5c57a396a3]
	I0916 04:12:01.709527    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:12:01.719853    4792 logs.go:276] 2 containers: [41d020d3dedc 609e5463648c]
	I0916 04:12:01.719936    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:12:01.730770    4792 logs.go:276] 1 containers: [0bd7f5ba91b9]
	I0916 04:12:01.730850    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:12:01.741628    4792 logs.go:276] 2 containers: [7a56c27a016a 97104cca0786]
	I0916 04:12:01.741714    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:12:01.752087    4792 logs.go:276] 0 containers: []
	W0916 04:12:01.752101    4792 logs.go:278] No container was found matching "kindnet"
	I0916 04:12:01.752168    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:12:01.762314    4792 logs.go:276] 2 containers: [03045b2f9560 ed1b9ebe2601]
	I0916 04:12:01.762332    4792 logs.go:123] Gathering logs for coredns [ed5c57a396a3] ...
	I0916 04:12:01.762338    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5c57a396a3"
	I0916 04:12:01.774009    4792 logs.go:123] Gathering logs for kube-scheduler [41d020d3dedc] ...
	I0916 04:12:01.774020    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41d020d3dedc"
	I0916 04:12:01.785711    4792 logs.go:123] Gathering logs for kube-controller-manager [7a56c27a016a] ...
	I0916 04:12:01.785725    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a56c27a016a"
	I0916 04:12:01.802890    4792 logs.go:123] Gathering logs for kubelet ...
	I0916 04:12:01.802899    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:12:01.841436    4792 logs.go:123] Gathering logs for kube-scheduler [609e5463648c] ...
	I0916 04:12:01.841445    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 609e5463648c"
	I0916 04:12:01.862677    4792 logs.go:123] Gathering logs for kube-controller-manager [97104cca0786] ...
	I0916 04:12:01.862692    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97104cca0786"
	I0916 04:12:01.875126    4792 logs.go:123] Gathering logs for etcd [751f46d9a26d] ...
	I0916 04:12:01.875140    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 751f46d9a26d"
	I0916 04:12:01.888933    4792 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:12:01.888944    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:12:01.925539    4792 logs.go:123] Gathering logs for etcd [1973f852f436] ...
	I0916 04:12:01.925552    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1973f852f436"
	I0916 04:12:01.940174    4792 logs.go:123] Gathering logs for kube-proxy [0bd7f5ba91b9] ...
	I0916 04:12:01.940183    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd7f5ba91b9"
	I0916 04:12:01.952531    4792 logs.go:123] Gathering logs for storage-provisioner [03045b2f9560] ...
	I0916 04:12:01.952544    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03045b2f9560"
	I0916 04:12:01.964293    4792 logs.go:123] Gathering logs for storage-provisioner [ed1b9ebe2601] ...
	I0916 04:12:01.964303    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed1b9ebe2601"
	I0916 04:12:01.975399    4792 logs.go:123] Gathering logs for container status ...
	I0916 04:12:01.975410    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:12:01.987038    4792 logs.go:123] Gathering logs for dmesg ...
	I0916 04:12:01.987049    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:12:01.991368    4792 logs.go:123] Gathering logs for kube-apiserver [40ab2f675d22] ...
	I0916 04:12:01.991374    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40ab2f675d22"
	I0916 04:12:02.029151    4792 logs.go:123] Gathering logs for Docker ...
	I0916 04:12:02.029163    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:12:02.055072    4792 logs.go:123] Gathering logs for kube-apiserver [99668d812a17] ...
	I0916 04:12:02.055084    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99668d812a17"
	I0916 04:12:04.570939    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:12:09.572873    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:12:09.572997    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:12:09.583649    4792 logs.go:276] 2 containers: [99668d812a17 40ab2f675d22]
	I0916 04:12:09.583734    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:12:09.594322    4792 logs.go:276] 2 containers: [751f46d9a26d 1973f852f436]
	I0916 04:12:09.594408    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:12:09.604576    4792 logs.go:276] 1 containers: [ed5c57a396a3]
	I0916 04:12:09.604655    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:12:09.615359    4792 logs.go:276] 2 containers: [41d020d3dedc 609e5463648c]
	I0916 04:12:09.615439    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:12:09.625954    4792 logs.go:276] 1 containers: [0bd7f5ba91b9]
	I0916 04:12:09.626038    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:12:09.636601    4792 logs.go:276] 2 containers: [7a56c27a016a 97104cca0786]
	I0916 04:12:09.636692    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:12:09.647036    4792 logs.go:276] 0 containers: []
	W0916 04:12:09.647049    4792 logs.go:278] No container was found matching "kindnet"
	I0916 04:12:09.647124    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:12:09.657846    4792 logs.go:276] 2 containers: [03045b2f9560 ed1b9ebe2601]
	I0916 04:12:09.657863    4792 logs.go:123] Gathering logs for kube-controller-manager [7a56c27a016a] ...
	I0916 04:12:09.657869    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a56c27a016a"
	I0916 04:12:09.678132    4792 logs.go:123] Gathering logs for kubelet ...
	I0916 04:12:09.678143    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:12:09.715111    4792 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:12:09.715128    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:12:09.756696    4792 logs.go:123] Gathering logs for kube-scheduler [41d020d3dedc] ...
	I0916 04:12:09.756713    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41d020d3dedc"
	I0916 04:12:09.768834    4792 logs.go:123] Gathering logs for kube-controller-manager [97104cca0786] ...
	I0916 04:12:09.768850    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97104cca0786"
	I0916 04:12:09.781757    4792 logs.go:123] Gathering logs for storage-provisioner [ed1b9ebe2601] ...
	I0916 04:12:09.781768    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed1b9ebe2601"
	I0916 04:12:09.793351    4792 logs.go:123] Gathering logs for kube-apiserver [40ab2f675d22] ...
	I0916 04:12:09.793362    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40ab2f675d22"
	I0916 04:12:09.837439    4792 logs.go:123] Gathering logs for etcd [751f46d9a26d] ...
	I0916 04:12:09.837467    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 751f46d9a26d"
	I0916 04:12:09.852197    4792 logs.go:123] Gathering logs for etcd [1973f852f436] ...
	I0916 04:12:09.852209    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1973f852f436"
	I0916 04:12:09.866518    4792 logs.go:123] Gathering logs for container status ...
	I0916 04:12:09.866528    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:12:09.877932    4792 logs.go:123] Gathering logs for coredns [ed5c57a396a3] ...
	I0916 04:12:09.877946    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5c57a396a3"
	I0916 04:12:09.888937    4792 logs.go:123] Gathering logs for storage-provisioner [03045b2f9560] ...
	I0916 04:12:09.888948    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03045b2f9560"
	I0916 04:12:09.901152    4792 logs.go:123] Gathering logs for Docker ...
	I0916 04:12:09.901163    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:12:09.925362    4792 logs.go:123] Gathering logs for kube-proxy [0bd7f5ba91b9] ...
	I0916 04:12:09.925369    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd7f5ba91b9"
	I0916 04:12:09.936980    4792 logs.go:123] Gathering logs for dmesg ...
	I0916 04:12:09.936993    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:12:09.941468    4792 logs.go:123] Gathering logs for kube-apiserver [99668d812a17] ...
	I0916 04:12:09.941476    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99668d812a17"
	I0916 04:12:09.955594    4792 logs.go:123] Gathering logs for kube-scheduler [609e5463648c] ...
	I0916 04:12:09.955609    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 609e5463648c"
	I0916 04:12:12.479963    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:12:17.481676    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:12:17.481890    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:12:17.498274    4792 logs.go:276] 2 containers: [99668d812a17 40ab2f675d22]
	I0916 04:12:17.498370    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:12:17.510539    4792 logs.go:276] 2 containers: [751f46d9a26d 1973f852f436]
	I0916 04:12:17.510627    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:12:17.521875    4792 logs.go:276] 1 containers: [ed5c57a396a3]
	I0916 04:12:17.521955    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:12:17.536396    4792 logs.go:276] 2 containers: [41d020d3dedc 609e5463648c]
	I0916 04:12:17.536480    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:12:17.546600    4792 logs.go:276] 1 containers: [0bd7f5ba91b9]
	I0916 04:12:17.546676    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:12:17.557286    4792 logs.go:276] 2 containers: [7a56c27a016a 97104cca0786]
	I0916 04:12:17.557369    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:12:17.567885    4792 logs.go:276] 0 containers: []
	W0916 04:12:17.567895    4792 logs.go:278] No container was found matching "kindnet"
	I0916 04:12:17.567956    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:12:17.578564    4792 logs.go:276] 2 containers: [03045b2f9560 ed1b9ebe2601]
	I0916 04:12:17.578584    4792 logs.go:123] Gathering logs for etcd [751f46d9a26d] ...
	I0916 04:12:17.578589    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 751f46d9a26d"
	I0916 04:12:17.592625    4792 logs.go:123] Gathering logs for kube-scheduler [41d020d3dedc] ...
	I0916 04:12:17.592638    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41d020d3dedc"
	I0916 04:12:17.605839    4792 logs.go:123] Gathering logs for container status ...
	I0916 04:12:17.605851    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:12:17.618361    4792 logs.go:123] Gathering logs for kube-apiserver [40ab2f675d22] ...
	I0916 04:12:17.618372    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40ab2f675d22"
	I0916 04:12:17.656569    4792 logs.go:123] Gathering logs for dmesg ...
	I0916 04:12:17.656581    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:12:17.661364    4792 logs.go:123] Gathering logs for coredns [ed5c57a396a3] ...
	I0916 04:12:17.661374    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5c57a396a3"
	I0916 04:12:17.672083    4792 logs.go:123] Gathering logs for kube-scheduler [609e5463648c] ...
	I0916 04:12:17.672094    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 609e5463648c"
	I0916 04:12:17.692329    4792 logs.go:123] Gathering logs for kube-proxy [0bd7f5ba91b9] ...
	I0916 04:12:17.692340    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd7f5ba91b9"
	I0916 04:12:17.703886    4792 logs.go:123] Gathering logs for Docker ...
	I0916 04:12:17.703895    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:12:17.729036    4792 logs.go:123] Gathering logs for kubelet ...
	I0916 04:12:17.729043    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:12:17.768102    4792 logs.go:123] Gathering logs for kube-apiserver [99668d812a17] ...
	I0916 04:12:17.768117    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99668d812a17"
	I0916 04:12:17.783953    4792 logs.go:123] Gathering logs for kube-controller-manager [97104cca0786] ...
	I0916 04:12:17.783962    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97104cca0786"
	I0916 04:12:17.796590    4792 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:12:17.796600    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:12:17.834867    4792 logs.go:123] Gathering logs for kube-controller-manager [7a56c27a016a] ...
	I0916 04:12:17.834884    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a56c27a016a"
	I0916 04:12:17.853039    4792 logs.go:123] Gathering logs for storage-provisioner [03045b2f9560] ...
	I0916 04:12:17.853049    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03045b2f9560"
	I0916 04:12:17.865870    4792 logs.go:123] Gathering logs for storage-provisioner [ed1b9ebe2601] ...
	I0916 04:12:17.865881    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed1b9ebe2601"
	I0916 04:12:17.878973    4792 logs.go:123] Gathering logs for etcd [1973f852f436] ...
	I0916 04:12:17.878986    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1973f852f436"
	I0916 04:12:20.396674    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:12:25.398916    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:12:25.399164    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:12:25.418882    4792 logs.go:276] 2 containers: [99668d812a17 40ab2f675d22]
	I0916 04:12:25.418984    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:12:25.433094    4792 logs.go:276] 2 containers: [751f46d9a26d 1973f852f436]
	I0916 04:12:25.433184    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:12:25.445134    4792 logs.go:276] 1 containers: [ed5c57a396a3]
	I0916 04:12:25.445218    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:12:25.456106    4792 logs.go:276] 2 containers: [41d020d3dedc 609e5463648c]
	I0916 04:12:25.456195    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:12:25.466293    4792 logs.go:276] 1 containers: [0bd7f5ba91b9]
	I0916 04:12:25.466374    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:12:25.479454    4792 logs.go:276] 2 containers: [7a56c27a016a 97104cca0786]
	I0916 04:12:25.479542    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:12:25.489740    4792 logs.go:276] 0 containers: []
	W0916 04:12:25.489753    4792 logs.go:278] No container was found matching "kindnet"
	I0916 04:12:25.489830    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:12:25.500428    4792 logs.go:276] 2 containers: [03045b2f9560 ed1b9ebe2601]
	I0916 04:12:25.500446    4792 logs.go:123] Gathering logs for kubelet ...
	I0916 04:12:25.500451    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:12:25.539346    4792 logs.go:123] Gathering logs for kube-apiserver [40ab2f675d22] ...
	I0916 04:12:25.539354    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40ab2f675d22"
	I0916 04:12:25.581930    4792 logs.go:123] Gathering logs for etcd [1973f852f436] ...
	I0916 04:12:25.581941    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1973f852f436"
	I0916 04:12:25.596580    4792 logs.go:123] Gathering logs for kube-scheduler [41d020d3dedc] ...
	I0916 04:12:25.596591    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41d020d3dedc"
	I0916 04:12:25.608730    4792 logs.go:123] Gathering logs for kube-proxy [0bd7f5ba91b9] ...
	I0916 04:12:25.608744    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd7f5ba91b9"
	I0916 04:12:25.620527    4792 logs.go:123] Gathering logs for kube-controller-manager [97104cca0786] ...
	I0916 04:12:25.620538    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97104cca0786"
	I0916 04:12:25.633408    4792 logs.go:123] Gathering logs for storage-provisioner [03045b2f9560] ...
	I0916 04:12:25.633420    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03045b2f9560"
	I0916 04:12:25.645484    4792 logs.go:123] Gathering logs for kube-apiserver [99668d812a17] ...
	I0916 04:12:25.645499    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99668d812a17"
	I0916 04:12:25.661017    4792 logs.go:123] Gathering logs for kube-scheduler [609e5463648c] ...
	I0916 04:12:25.661027    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 609e5463648c"
	I0916 04:12:25.683219    4792 logs.go:123] Gathering logs for storage-provisioner [ed1b9ebe2601] ...
	I0916 04:12:25.683227    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed1b9ebe2601"
	I0916 04:12:25.695113    4792 logs.go:123] Gathering logs for container status ...
	I0916 04:12:25.695122    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:12:25.708942    4792 logs.go:123] Gathering logs for etcd [751f46d9a26d] ...
	I0916 04:12:25.708953    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 751f46d9a26d"
	I0916 04:12:25.725530    4792 logs.go:123] Gathering logs for coredns [ed5c57a396a3] ...
	I0916 04:12:25.725549    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5c57a396a3"
	I0916 04:12:25.738374    4792 logs.go:123] Gathering logs for dmesg ...
	I0916 04:12:25.738390    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:12:25.743077    4792 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:12:25.743095    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:12:25.778883    4792 logs.go:123] Gathering logs for kube-controller-manager [7a56c27a016a] ...
	I0916 04:12:25.778897    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a56c27a016a"
	I0916 04:12:25.803174    4792 logs.go:123] Gathering logs for Docker ...
	I0916 04:12:25.803188    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:12:28.331783    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:12:33.334042    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:12:33.334282    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:12:33.350832    4792 logs.go:276] 2 containers: [99668d812a17 40ab2f675d22]
	I0916 04:12:33.350936    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:12:33.363877    4792 logs.go:276] 2 containers: [751f46d9a26d 1973f852f436]
	I0916 04:12:33.363969    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:12:33.374661    4792 logs.go:276] 1 containers: [ed5c57a396a3]
	I0916 04:12:33.374741    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:12:33.384993    4792 logs.go:276] 2 containers: [41d020d3dedc 609e5463648c]
	I0916 04:12:33.385089    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:12:33.395674    4792 logs.go:276] 1 containers: [0bd7f5ba91b9]
	I0916 04:12:33.395755    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:12:33.406161    4792 logs.go:276] 2 containers: [7a56c27a016a 97104cca0786]
	I0916 04:12:33.406242    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:12:33.416766    4792 logs.go:276] 0 containers: []
	W0916 04:12:33.416778    4792 logs.go:278] No container was found matching "kindnet"
	I0916 04:12:33.416845    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:12:33.428483    4792 logs.go:276] 2 containers: [03045b2f9560 ed1b9ebe2601]
	I0916 04:12:33.428501    4792 logs.go:123] Gathering logs for dmesg ...
	I0916 04:12:33.428506    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:12:33.433721    4792 logs.go:123] Gathering logs for container status ...
	I0916 04:12:33.433729    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:12:33.446718    4792 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:12:33.446732    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:12:33.485927    4792 logs.go:123] Gathering logs for etcd [751f46d9a26d] ...
	I0916 04:12:33.485939    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 751f46d9a26d"
	I0916 04:12:33.510936    4792 logs.go:123] Gathering logs for kube-scheduler [41d020d3dedc] ...
	I0916 04:12:33.510949    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41d020d3dedc"
	I0916 04:12:33.528550    4792 logs.go:123] Gathering logs for kube-proxy [0bd7f5ba91b9] ...
	I0916 04:12:33.528559    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd7f5ba91b9"
	I0916 04:12:33.541022    4792 logs.go:123] Gathering logs for storage-provisioner [03045b2f9560] ...
	I0916 04:12:33.541035    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03045b2f9560"
	I0916 04:12:33.553904    4792 logs.go:123] Gathering logs for kube-apiserver [40ab2f675d22] ...
	I0916 04:12:33.553912    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40ab2f675d22"
	I0916 04:12:33.593278    4792 logs.go:123] Gathering logs for etcd [1973f852f436] ...
	I0916 04:12:33.593289    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1973f852f436"
	I0916 04:12:33.615798    4792 logs.go:123] Gathering logs for coredns [ed5c57a396a3] ...
	I0916 04:12:33.615809    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5c57a396a3"
	I0916 04:12:33.632224    4792 logs.go:123] Gathering logs for kube-scheduler [609e5463648c] ...
	I0916 04:12:33.632237    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 609e5463648c"
	I0916 04:12:33.654483    4792 logs.go:123] Gathering logs for kube-controller-manager [7a56c27a016a] ...
	I0916 04:12:33.654490    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a56c27a016a"
	I0916 04:12:33.672882    4792 logs.go:123] Gathering logs for kube-controller-manager [97104cca0786] ...
	I0916 04:12:33.672897    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97104cca0786"
	I0916 04:12:33.686618    4792 logs.go:123] Gathering logs for kubelet ...
	I0916 04:12:33.686631    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:12:33.726812    4792 logs.go:123] Gathering logs for kube-apiserver [99668d812a17] ...
	I0916 04:12:33.726832    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99668d812a17"
	I0916 04:12:33.743667    4792 logs.go:123] Gathering logs for storage-provisioner [ed1b9ebe2601] ...
	I0916 04:12:33.743676    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed1b9ebe2601"
	I0916 04:12:33.757102    4792 logs.go:123] Gathering logs for Docker ...
	I0916 04:12:33.757113    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:12:36.283799    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:12:41.286105    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:12:41.286496    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:12:41.318396    4792 logs.go:276] 2 containers: [99668d812a17 40ab2f675d22]
	I0916 04:12:41.318552    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:12:41.339224    4792 logs.go:276] 2 containers: [751f46d9a26d 1973f852f436]
	I0916 04:12:41.339333    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:12:41.353304    4792 logs.go:276] 1 containers: [ed5c57a396a3]
	I0916 04:12:41.353389    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:12:41.365733    4792 logs.go:276] 2 containers: [41d020d3dedc 609e5463648c]
	I0916 04:12:41.365821    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:12:41.377171    4792 logs.go:276] 1 containers: [0bd7f5ba91b9]
	I0916 04:12:41.377260    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:12:41.388277    4792 logs.go:276] 2 containers: [7a56c27a016a 97104cca0786]
	I0916 04:12:41.388363    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:12:41.399304    4792 logs.go:276] 0 containers: []
	W0916 04:12:41.399316    4792 logs.go:278] No container was found matching "kindnet"
	I0916 04:12:41.399395    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:12:41.411438    4792 logs.go:276] 2 containers: [03045b2f9560 ed1b9ebe2601]
	I0916 04:12:41.411455    4792 logs.go:123] Gathering logs for kube-apiserver [99668d812a17] ...
	I0916 04:12:41.411462    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99668d812a17"
	I0916 04:12:41.426390    4792 logs.go:123] Gathering logs for etcd [1973f852f436] ...
	I0916 04:12:41.426402    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1973f852f436"
	I0916 04:12:41.441784    4792 logs.go:123] Gathering logs for kube-scheduler [609e5463648c] ...
	I0916 04:12:41.441801    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 609e5463648c"
	I0916 04:12:41.464101    4792 logs.go:123] Gathering logs for storage-provisioner [ed1b9ebe2601] ...
	I0916 04:12:41.464112    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed1b9ebe2601"
	I0916 04:12:41.475959    4792 logs.go:123] Gathering logs for kubelet ...
	I0916 04:12:41.475970    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:12:41.516052    4792 logs.go:123] Gathering logs for dmesg ...
	I0916 04:12:41.516066    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:12:41.520588    4792 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:12:41.520601    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:12:41.558238    4792 logs.go:123] Gathering logs for kube-scheduler [41d020d3dedc] ...
	I0916 04:12:41.558248    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41d020d3dedc"
	I0916 04:12:41.570688    4792 logs.go:123] Gathering logs for kube-controller-manager [7a56c27a016a] ...
	I0916 04:12:41.570698    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a56c27a016a"
	I0916 04:12:41.596871    4792 logs.go:123] Gathering logs for kube-controller-manager [97104cca0786] ...
	I0916 04:12:41.596884    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97104cca0786"
	I0916 04:12:41.610808    4792 logs.go:123] Gathering logs for Docker ...
	I0916 04:12:41.610821    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:12:41.637132    4792 logs.go:123] Gathering logs for kube-apiserver [40ab2f675d22] ...
	I0916 04:12:41.637147    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40ab2f675d22"
	I0916 04:12:41.682324    4792 logs.go:123] Gathering logs for etcd [751f46d9a26d] ...
	I0916 04:12:41.682336    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 751f46d9a26d"
	I0916 04:12:41.697027    4792 logs.go:123] Gathering logs for kube-proxy [0bd7f5ba91b9] ...
	I0916 04:12:41.697037    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd7f5ba91b9"
	I0916 04:12:41.708755    4792 logs.go:123] Gathering logs for coredns [ed5c57a396a3] ...
	I0916 04:12:41.708766    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5c57a396a3"
	I0916 04:12:41.719951    4792 logs.go:123] Gathering logs for storage-provisioner [03045b2f9560] ...
	I0916 04:12:41.719962    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03045b2f9560"
	I0916 04:12:41.735979    4792 logs.go:123] Gathering logs for container status ...
	I0916 04:12:41.735990    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:12:44.249231    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:12:49.250308    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:12:49.250372    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:12:49.261796    4792 logs.go:276] 2 containers: [99668d812a17 40ab2f675d22]
	I0916 04:12:49.261849    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:12:49.273582    4792 logs.go:276] 2 containers: [751f46d9a26d 1973f852f436]
	I0916 04:12:49.273635    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:12:49.285371    4792 logs.go:276] 1 containers: [ed5c57a396a3]
	I0916 04:12:49.285442    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:12:49.296853    4792 logs.go:276] 2 containers: [41d020d3dedc 609e5463648c]
	I0916 04:12:49.296943    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:12:49.308493    4792 logs.go:276] 1 containers: [0bd7f5ba91b9]
	I0916 04:12:49.308577    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:12:49.320310    4792 logs.go:276] 2 containers: [7a56c27a016a 97104cca0786]
	I0916 04:12:49.320394    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:12:49.331278    4792 logs.go:276] 0 containers: []
	W0916 04:12:49.331289    4792 logs.go:278] No container was found matching "kindnet"
	I0916 04:12:49.331361    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:12:49.342023    4792 logs.go:276] 2 containers: [03045b2f9560 ed1b9ebe2601]
	I0916 04:12:49.342042    4792 logs.go:123] Gathering logs for kube-proxy [0bd7f5ba91b9] ...
	I0916 04:12:49.342048    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd7f5ba91b9"
	I0916 04:12:49.354623    4792 logs.go:123] Gathering logs for storage-provisioner [ed1b9ebe2601] ...
	I0916 04:12:49.354639    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed1b9ebe2601"
	I0916 04:12:49.368581    4792 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:12:49.368593    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:12:49.406896    4792 logs.go:123] Gathering logs for etcd [1973f852f436] ...
	I0916 04:12:49.406908    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1973f852f436"
	I0916 04:12:49.422295    4792 logs.go:123] Gathering logs for kube-scheduler [609e5463648c] ...
	I0916 04:12:49.422312    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 609e5463648c"
	I0916 04:12:49.445405    4792 logs.go:123] Gathering logs for container status ...
	I0916 04:12:49.445423    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:12:49.457990    4792 logs.go:123] Gathering logs for kube-apiserver [99668d812a17] ...
	I0916 04:12:49.458004    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99668d812a17"
	I0916 04:12:49.473990    4792 logs.go:123] Gathering logs for kube-apiserver [40ab2f675d22] ...
	I0916 04:12:49.474002    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40ab2f675d22"
	I0916 04:12:49.513185    4792 logs.go:123] Gathering logs for kube-scheduler [41d020d3dedc] ...
	I0916 04:12:49.513196    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41d020d3dedc"
	I0916 04:12:49.526262    4792 logs.go:123] Gathering logs for coredns [ed5c57a396a3] ...
	I0916 04:12:49.526273    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5c57a396a3"
	I0916 04:12:49.538782    4792 logs.go:123] Gathering logs for Docker ...
	I0916 04:12:49.538794    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:12:49.564632    4792 logs.go:123] Gathering logs for kube-controller-manager [7a56c27a016a] ...
	I0916 04:12:49.564644    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a56c27a016a"
	I0916 04:12:49.581708    4792 logs.go:123] Gathering logs for kube-controller-manager [97104cca0786] ...
	I0916 04:12:49.581719    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97104cca0786"
	I0916 04:12:49.596689    4792 logs.go:123] Gathering logs for storage-provisioner [03045b2f9560] ...
	I0916 04:12:49.596700    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03045b2f9560"
	I0916 04:12:49.608766    4792 logs.go:123] Gathering logs for kubelet ...
	I0916 04:12:49.608776    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:12:49.647269    4792 logs.go:123] Gathering logs for dmesg ...
	I0916 04:12:49.647282    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:12:49.651652    4792 logs.go:123] Gathering logs for etcd [751f46d9a26d] ...
	I0916 04:12:49.651658    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 751f46d9a26d"
	I0916 04:12:52.168020    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:12:57.170189    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:12:57.170296    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:12:57.181600    4792 logs.go:276] 2 containers: [99668d812a17 40ab2f675d22]
	I0916 04:12:57.181684    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:12:57.193042    4792 logs.go:276] 2 containers: [751f46d9a26d 1973f852f436]
	I0916 04:12:57.193141    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:12:57.210744    4792 logs.go:276] 1 containers: [ed5c57a396a3]
	I0916 04:12:57.210832    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:12:57.222452    4792 logs.go:276] 2 containers: [41d020d3dedc 609e5463648c]
	I0916 04:12:57.222541    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:12:57.234034    4792 logs.go:276] 1 containers: [0bd7f5ba91b9]
	I0916 04:12:57.234125    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:12:57.245132    4792 logs.go:276] 2 containers: [7a56c27a016a 97104cca0786]
	I0916 04:12:57.245213    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:12:57.256628    4792 logs.go:276] 0 containers: []
	W0916 04:12:57.256642    4792 logs.go:278] No container was found matching "kindnet"
	I0916 04:12:57.256711    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:12:57.268124    4792 logs.go:276] 2 containers: [03045b2f9560 ed1b9ebe2601]
	I0916 04:12:57.268145    4792 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:12:57.268151    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:12:57.305184    4792 logs.go:123] Gathering logs for kube-apiserver [99668d812a17] ...
	I0916 04:12:57.305197    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99668d812a17"
	I0916 04:12:57.320803    4792 logs.go:123] Gathering logs for etcd [1973f852f436] ...
	I0916 04:12:57.320814    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1973f852f436"
	I0916 04:12:57.338013    4792 logs.go:123] Gathering logs for kube-proxy [0bd7f5ba91b9] ...
	I0916 04:12:57.338025    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd7f5ba91b9"
	I0916 04:12:57.350655    4792 logs.go:123] Gathering logs for kube-controller-manager [7a56c27a016a] ...
	I0916 04:12:57.350668    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a56c27a016a"
	I0916 04:12:57.380969    4792 logs.go:123] Gathering logs for kube-controller-manager [97104cca0786] ...
	I0916 04:12:57.380980    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97104cca0786"
	I0916 04:12:57.394024    4792 logs.go:123] Gathering logs for storage-provisioner [03045b2f9560] ...
	I0916 04:12:57.394035    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03045b2f9560"
	I0916 04:12:57.406058    4792 logs.go:123] Gathering logs for storage-provisioner [ed1b9ebe2601] ...
	I0916 04:12:57.406067    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed1b9ebe2601"
	I0916 04:12:57.418566    4792 logs.go:123] Gathering logs for etcd [751f46d9a26d] ...
	I0916 04:12:57.418577    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 751f46d9a26d"
	I0916 04:12:57.440700    4792 logs.go:123] Gathering logs for container status ...
	I0916 04:12:57.440712    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:12:57.454136    4792 logs.go:123] Gathering logs for kubelet ...
	I0916 04:12:57.454149    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:12:57.492222    4792 logs.go:123] Gathering logs for dmesg ...
	I0916 04:12:57.492233    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:12:57.496275    4792 logs.go:123] Gathering logs for kube-apiserver [40ab2f675d22] ...
	I0916 04:12:57.496282    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40ab2f675d22"
	I0916 04:12:57.534364    4792 logs.go:123] Gathering logs for coredns [ed5c57a396a3] ...
	I0916 04:12:57.534377    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5c57a396a3"
	I0916 04:12:57.545633    4792 logs.go:123] Gathering logs for kube-scheduler [41d020d3dedc] ...
	I0916 04:12:57.545644    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41d020d3dedc"
	I0916 04:12:57.557544    4792 logs.go:123] Gathering logs for kube-scheduler [609e5463648c] ...
	I0916 04:12:57.557555    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 609e5463648c"
	I0916 04:12:57.578898    4792 logs.go:123] Gathering logs for Docker ...
	I0916 04:12:57.578915    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:13:00.105337    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:13:05.107470    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:13:05.107577    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:13:05.122702    4792 logs.go:276] 2 containers: [99668d812a17 40ab2f675d22]
	I0916 04:13:05.122788    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:13:05.134030    4792 logs.go:276] 2 containers: [751f46d9a26d 1973f852f436]
	I0916 04:13:05.134119    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:13:05.145768    4792 logs.go:276] 1 containers: [ed5c57a396a3]
	I0916 04:13:05.145857    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:13:05.163606    4792 logs.go:276] 2 containers: [41d020d3dedc 609e5463648c]
	I0916 04:13:05.163692    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:13:05.174689    4792 logs.go:276] 1 containers: [0bd7f5ba91b9]
	I0916 04:13:05.174768    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:13:05.186054    4792 logs.go:276] 2 containers: [7a56c27a016a 97104cca0786]
	I0916 04:13:05.186136    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:13:05.197639    4792 logs.go:276] 0 containers: []
	W0916 04:13:05.197654    4792 logs.go:278] No container was found matching "kindnet"
	I0916 04:13:05.197723    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:13:05.209557    4792 logs.go:276] 2 containers: [03045b2f9560 ed1b9ebe2601]
	I0916 04:13:05.209576    4792 logs.go:123] Gathering logs for etcd [751f46d9a26d] ...
	I0916 04:13:05.209582    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 751f46d9a26d"
	I0916 04:13:05.224822    4792 logs.go:123] Gathering logs for coredns [ed5c57a396a3] ...
	I0916 04:13:05.224835    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5c57a396a3"
	I0916 04:13:05.236971    4792 logs.go:123] Gathering logs for kube-scheduler [609e5463648c] ...
	I0916 04:13:05.236979    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 609e5463648c"
	I0916 04:13:05.259161    4792 logs.go:123] Gathering logs for kube-proxy [0bd7f5ba91b9] ...
	I0916 04:13:05.259171    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd7f5ba91b9"
	I0916 04:13:05.271465    4792 logs.go:123] Gathering logs for storage-provisioner [03045b2f9560] ...
	I0916 04:13:05.271474    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03045b2f9560"
	I0916 04:13:05.282673    4792 logs.go:123] Gathering logs for storage-provisioner [ed1b9ebe2601] ...
	I0916 04:13:05.282684    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed1b9ebe2601"
	I0916 04:13:05.293585    4792 logs.go:123] Gathering logs for dmesg ...
	I0916 04:13:05.293597    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:13:05.298034    4792 logs.go:123] Gathering logs for etcd [1973f852f436] ...
	I0916 04:13:05.298040    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1973f852f436"
	I0916 04:13:05.312518    4792 logs.go:123] Gathering logs for kube-apiserver [99668d812a17] ...
	I0916 04:13:05.312531    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99668d812a17"
	I0916 04:13:05.326127    4792 logs.go:123] Gathering logs for kube-scheduler [41d020d3dedc] ...
	I0916 04:13:05.326141    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41d020d3dedc"
	I0916 04:13:05.338250    4792 logs.go:123] Gathering logs for Docker ...
	I0916 04:13:05.338261    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:13:05.361492    4792 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:13:05.361499    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:13:05.395363    4792 logs.go:123] Gathering logs for kube-apiserver [40ab2f675d22] ...
	I0916 04:13:05.395378    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40ab2f675d22"
	I0916 04:13:05.434184    4792 logs.go:123] Gathering logs for kube-controller-manager [7a56c27a016a] ...
	I0916 04:13:05.434199    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a56c27a016a"
	I0916 04:13:05.451905    4792 logs.go:123] Gathering logs for kube-controller-manager [97104cca0786] ...
	I0916 04:13:05.451917    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97104cca0786"
	I0916 04:13:05.465025    4792 logs.go:123] Gathering logs for container status ...
	I0916 04:13:05.465038    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:13:05.476548    4792 logs.go:123] Gathering logs for kubelet ...
	I0916 04:13:05.476565    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:13:08.017974    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:13:13.020054    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:13:13.020146    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:13:13.031835    4792 logs.go:276] 2 containers: [99668d812a17 40ab2f675d22]
	I0916 04:13:13.031922    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:13:13.043329    4792 logs.go:276] 2 containers: [751f46d9a26d 1973f852f436]
	I0916 04:13:13.043415    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:13:13.054161    4792 logs.go:276] 1 containers: [ed5c57a396a3]
	I0916 04:13:13.054248    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:13:13.068462    4792 logs.go:276] 2 containers: [41d020d3dedc 609e5463648c]
	I0916 04:13:13.068546    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:13:13.079839    4792 logs.go:276] 1 containers: [0bd7f5ba91b9]
	I0916 04:13:13.079924    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:13:13.093155    4792 logs.go:276] 2 containers: [7a56c27a016a 97104cca0786]
	I0916 04:13:13.093240    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:13:13.104492    4792 logs.go:276] 0 containers: []
	W0916 04:13:13.104503    4792 logs.go:278] No container was found matching "kindnet"
	I0916 04:13:13.104575    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:13:13.115464    4792 logs.go:276] 2 containers: [03045b2f9560 ed1b9ebe2601]
	I0916 04:13:13.115484    4792 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:13:13.115490    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:13:13.151204    4792 logs.go:123] Gathering logs for etcd [1973f852f436] ...
	I0916 04:13:13.151219    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1973f852f436"
	I0916 04:13:13.166415    4792 logs.go:123] Gathering logs for kube-scheduler [609e5463648c] ...
	I0916 04:13:13.166427    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 609e5463648c"
	I0916 04:13:13.187838    4792 logs.go:123] Gathering logs for kube-controller-manager [97104cca0786] ...
	I0916 04:13:13.187850    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97104cca0786"
	I0916 04:13:13.200211    4792 logs.go:123] Gathering logs for Docker ...
	I0916 04:13:13.200221    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:13:13.223294    4792 logs.go:123] Gathering logs for container status ...
	I0916 04:13:13.223304    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:13:13.234829    4792 logs.go:123] Gathering logs for kube-apiserver [99668d812a17] ...
	I0916 04:13:13.234840    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99668d812a17"
	I0916 04:13:13.254065    4792 logs.go:123] Gathering logs for etcd [751f46d9a26d] ...
	I0916 04:13:13.254077    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 751f46d9a26d"
	I0916 04:13:13.267519    4792 logs.go:123] Gathering logs for kube-scheduler [41d020d3dedc] ...
	I0916 04:13:13.267527    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41d020d3dedc"
	I0916 04:13:13.278770    4792 logs.go:123] Gathering logs for storage-provisioner [ed1b9ebe2601] ...
	I0916 04:13:13.278781    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed1b9ebe2601"
	I0916 04:13:13.290508    4792 logs.go:123] Gathering logs for kubelet ...
	I0916 04:13:13.290520    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:13:13.326473    4792 logs.go:123] Gathering logs for kube-apiserver [40ab2f675d22] ...
	I0916 04:13:13.326481    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40ab2f675d22"
	I0916 04:13:13.365457    4792 logs.go:123] Gathering logs for coredns [ed5c57a396a3] ...
	I0916 04:13:13.365469    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5c57a396a3"
	I0916 04:13:13.376511    4792 logs.go:123] Gathering logs for kube-controller-manager [7a56c27a016a] ...
	I0916 04:13:13.376522    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a56c27a016a"
	I0916 04:13:13.405128    4792 logs.go:123] Gathering logs for storage-provisioner [03045b2f9560] ...
	I0916 04:13:13.405142    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03045b2f9560"
	I0916 04:13:13.416384    4792 logs.go:123] Gathering logs for dmesg ...
	I0916 04:13:13.416399    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:13:13.420803    4792 logs.go:123] Gathering logs for kube-proxy [0bd7f5ba91b9] ...
	I0916 04:13:13.420809    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd7f5ba91b9"
	I0916 04:13:15.934153    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:13:20.936422    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:13:20.936517    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:13:20.947568    4792 logs.go:276] 2 containers: [99668d812a17 40ab2f675d22]
	I0916 04:13:20.947646    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:13:20.958011    4792 logs.go:276] 2 containers: [751f46d9a26d 1973f852f436]
	I0916 04:13:20.958099    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:13:20.972702    4792 logs.go:276] 1 containers: [ed5c57a396a3]
	I0916 04:13:20.972785    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:13:20.988068    4792 logs.go:276] 2 containers: [41d020d3dedc 609e5463648c]
	I0916 04:13:20.988155    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:13:21.002554    4792 logs.go:276] 1 containers: [0bd7f5ba91b9]
	I0916 04:13:21.002641    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:13:21.014562    4792 logs.go:276] 2 containers: [7a56c27a016a 97104cca0786]
	I0916 04:13:21.014642    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:13:21.028922    4792 logs.go:276] 0 containers: []
	W0916 04:13:21.028935    4792 logs.go:278] No container was found matching "kindnet"
	I0916 04:13:21.029013    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:13:21.039597    4792 logs.go:276] 2 containers: [03045b2f9560 ed1b9ebe2601]
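The enumeration above leans on the k8s_<component> container-naming convention that cri-dockerd inherits from the old dockershim, so each docker ps name filter maps to one control-plane component. Two IDs each for kube-apiserver, etcd, kube-scheduler, and kube-controller-manager most likely mean an exited container from an earlier attempt alongside its replacement, which fits an apiserver that never reports healthy; the zero hits for "kindnet" are expected noise whenever the kindnet CNI is not in use. A sketch of one such query (quotes added for interactive shells):

    # list all kube-apiserver containers, running or exited, by ID
    docker ps -a --filter name=k8s_kube-apiserver --format '{{.ID}}'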
	I0916 04:13:21.039616    4792 logs.go:123] Gathering logs for etcd [1973f852f436] ...
	I0916 04:13:21.039622    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1973f852f436"
	I0916 04:13:21.054243    4792 logs.go:123] Gathering logs for coredns [ed5c57a396a3] ...
	I0916 04:13:21.054253    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5c57a396a3"
	I0916 04:13:21.065899    4792 logs.go:123] Gathering logs for storage-provisioner [ed1b9ebe2601] ...
	I0916 04:13:21.065910    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed1b9ebe2601"
	I0916 04:13:21.077219    4792 logs.go:123] Gathering logs for kubelet ...
	I0916 04:13:21.077230    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:13:21.112917    4792 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:13:21.112925    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
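"describe nodes" is the one gathering step that goes through kubectl rather than docker: minikube shells out to the version-pinned binary it installed in the VM (v1.24.1 here, matching the cluster) against the in-VM kubeconfig, so the query works even when the host's kubeconfig is stale. The same command, runnable from a shell inside the VM:

    sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig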
	I0916 04:13:21.147863    4792 logs.go:123] Gathering logs for kube-apiserver [99668d812a17] ...
	I0916 04:13:21.147874    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99668d812a17"
	I0916 04:13:21.162504    4792 logs.go:123] Gathering logs for container status ...
	I0916 04:13:21.162515    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:13:21.174482    4792 logs.go:123] Gathering logs for dmesg ...
	I0916 04:13:21.174493    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:13:21.178593    4792 logs.go:123] Gathering logs for kube-apiserver [40ab2f675d22] ...
	I0916 04:13:21.178602    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40ab2f675d22"
	I0916 04:13:21.215990    4792 logs.go:123] Gathering logs for kube-scheduler [41d020d3dedc] ...
	I0916 04:13:21.216004    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41d020d3dedc"
	I0916 04:13:21.227899    4792 logs.go:123] Gathering logs for Docker ...
	I0916 04:13:21.227910    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:13:21.251272    4792 logs.go:123] Gathering logs for etcd [751f46d9a26d] ...
	I0916 04:13:21.251281    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 751f46d9a26d"
	I0916 04:13:21.265181    4792 logs.go:123] Gathering logs for kube-scheduler [609e5463648c] ...
	I0916 04:13:21.265192    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 609e5463648c"
	I0916 04:13:21.286014    4792 logs.go:123] Gathering logs for kube-controller-manager [97104cca0786] ...
	I0916 04:13:21.286023    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97104cca0786"
	I0916 04:13:21.301354    4792 logs.go:123] Gathering logs for kube-proxy [0bd7f5ba91b9] ...
	I0916 04:13:21.301369    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd7f5ba91b9"
	I0916 04:13:21.313191    4792 logs.go:123] Gathering logs for kube-controller-manager [7a56c27a016a] ...
	I0916 04:13:21.313204    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a56c27a016a"
	I0916 04:13:21.330402    4792 logs.go:123] Gathering logs for storage-provisioner [03045b2f9560] ...
	I0916 04:13:21.330413    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03045b2f9560"
	I0916 04:13:23.843785    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:13:28.845970    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:13:28.846154    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:13:28.857361    4792 logs.go:276] 2 containers: [99668d812a17 40ab2f675d22]
	I0916 04:13:28.857444    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:13:28.867512    4792 logs.go:276] 2 containers: [751f46d9a26d 1973f852f436]
	I0916 04:13:28.867595    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:13:28.877414    4792 logs.go:276] 1 containers: [ed5c57a396a3]
	I0916 04:13:28.877504    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:13:28.887876    4792 logs.go:276] 2 containers: [41d020d3dedc 609e5463648c]
	I0916 04:13:28.887954    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:13:28.898033    4792 logs.go:276] 1 containers: [0bd7f5ba91b9]
	I0916 04:13:28.898106    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:13:28.909311    4792 logs.go:276] 2 containers: [7a56c27a016a 97104cca0786]
	I0916 04:13:28.909383    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:13:28.919521    4792 logs.go:276] 0 containers: []
	W0916 04:13:28.919533    4792 logs.go:278] No container was found matching "kindnet"
	I0916 04:13:28.919606    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:13:28.934792    4792 logs.go:276] 2 containers: [03045b2f9560 ed1b9ebe2601]
	I0916 04:13:28.934808    4792 logs.go:123] Gathering logs for kubelet ...
	I0916 04:13:28.934814    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:13:28.973304    4792 logs.go:123] Gathering logs for kube-apiserver [40ab2f675d22] ...
	I0916 04:13:28.973317    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40ab2f675d22"
	I0916 04:13:29.013953    4792 logs.go:123] Gathering logs for kube-scheduler [41d020d3dedc] ...
	I0916 04:13:29.013972    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41d020d3dedc"
	I0916 04:13:29.026415    4792 logs.go:123] Gathering logs for kube-controller-manager [97104cca0786] ...
	I0916 04:13:29.026427    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97104cca0786"
	I0916 04:13:29.039231    4792 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:13:29.039242    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:13:29.077340    4792 logs.go:123] Gathering logs for kube-apiserver [99668d812a17] ...
	I0916 04:13:29.077351    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99668d812a17"
	I0916 04:13:29.091840    4792 logs.go:123] Gathering logs for etcd [751f46d9a26d] ...
	I0916 04:13:29.091851    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 751f46d9a26d"
	I0916 04:13:29.105510    4792 logs.go:123] Gathering logs for kube-scheduler [609e5463648c] ...
	I0916 04:13:29.105519    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 609e5463648c"
	I0916 04:13:29.126637    4792 logs.go:123] Gathering logs for kube-proxy [0bd7f5ba91b9] ...
	I0916 04:13:29.126649    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd7f5ba91b9"
	I0916 04:13:29.138651    4792 logs.go:123] Gathering logs for kube-controller-manager [7a56c27a016a] ...
	I0916 04:13:29.138662    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a56c27a016a"
	I0916 04:13:29.155898    4792 logs.go:123] Gathering logs for coredns [ed5c57a396a3] ...
	I0916 04:13:29.155913    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5c57a396a3"
	I0916 04:13:29.167146    4792 logs.go:123] Gathering logs for storage-provisioner [03045b2f9560] ...
	I0916 04:13:29.167156    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03045b2f9560"
	I0916 04:13:29.179926    4792 logs.go:123] Gathering logs for storage-provisioner [ed1b9ebe2601] ...
	I0916 04:13:29.179935    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed1b9ebe2601"
	I0916 04:13:29.191850    4792 logs.go:123] Gathering logs for Docker ...
	I0916 04:13:29.191860    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:13:29.214847    4792 logs.go:123] Gathering logs for dmesg ...
	I0916 04:13:29.214863    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:13:29.218970    4792 logs.go:123] Gathering logs for etcd [1973f852f436] ...
	I0916 04:13:29.218977    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1973f852f436"
	I0916 04:13:29.232856    4792 logs.go:123] Gathering logs for container status ...
	I0916 04:13:29.232866    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:13:31.747316    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:13:36.748359    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:13:36.748529    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:13:36.759040    4792 logs.go:276] 2 containers: [99668d812a17 40ab2f675d22]
	I0916 04:13:36.759127    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:13:36.779604    4792 logs.go:276] 2 containers: [751f46d9a26d 1973f852f436]
	I0916 04:13:36.779687    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:13:36.790030    4792 logs.go:276] 1 containers: [ed5c57a396a3]
	I0916 04:13:36.790106    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:13:36.800626    4792 logs.go:276] 2 containers: [41d020d3dedc 609e5463648c]
	I0916 04:13:36.800710    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:13:36.810942    4792 logs.go:276] 1 containers: [0bd7f5ba91b9]
	I0916 04:13:36.811016    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:13:36.821537    4792 logs.go:276] 2 containers: [7a56c27a016a 97104cca0786]
	I0916 04:13:36.821608    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:13:36.832023    4792 logs.go:276] 0 containers: []
	W0916 04:13:36.832034    4792 logs.go:278] No container was found matching "kindnet"
	I0916 04:13:36.832097    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:13:36.842480    4792 logs.go:276] 2 containers: [03045b2f9560 ed1b9ebe2601]
	I0916 04:13:36.842497    4792 logs.go:123] Gathering logs for coredns [ed5c57a396a3] ...
	I0916 04:13:36.842502    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5c57a396a3"
	I0916 04:13:36.853504    4792 logs.go:123] Gathering logs for container status ...
	I0916 04:13:36.853516    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:13:36.865141    4792 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:13:36.865152    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:13:36.900052    4792 logs.go:123] Gathering logs for kube-proxy [0bd7f5ba91b9] ...
	I0916 04:13:36.900064    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd7f5ba91b9"
	I0916 04:13:36.912666    4792 logs.go:123] Gathering logs for kube-controller-manager [7a56c27a016a] ...
	I0916 04:13:36.912676    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a56c27a016a"
	I0916 04:13:36.930811    4792 logs.go:123] Gathering logs for storage-provisioner [03045b2f9560] ...
	I0916 04:13:36.930825    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03045b2f9560"
	I0916 04:13:36.942516    4792 logs.go:123] Gathering logs for kubelet ...
	I0916 04:13:36.942532    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:13:36.978248    4792 logs.go:123] Gathering logs for dmesg ...
	I0916 04:13:36.978255    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:13:36.982408    4792 logs.go:123] Gathering logs for etcd [751f46d9a26d] ...
	I0916 04:13:36.982415    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 751f46d9a26d"
	I0916 04:13:36.996763    4792 logs.go:123] Gathering logs for etcd [1973f852f436] ...
	I0916 04:13:36.996775    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1973f852f436"
	I0916 04:13:37.014093    4792 logs.go:123] Gathering logs for kube-scheduler [41d020d3dedc] ...
	I0916 04:13:37.014106    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41d020d3dedc"
	I0916 04:13:37.026290    4792 logs.go:123] Gathering logs for storage-provisioner [ed1b9ebe2601] ...
	I0916 04:13:37.026306    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed1b9ebe2601"
	I0916 04:13:37.043489    4792 logs.go:123] Gathering logs for Docker ...
	I0916 04:13:37.043499    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:13:37.066121    4792 logs.go:123] Gathering logs for kube-apiserver [99668d812a17] ...
	I0916 04:13:37.066129    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99668d812a17"
	I0916 04:13:37.080242    4792 logs.go:123] Gathering logs for kube-apiserver [40ab2f675d22] ...
	I0916 04:13:37.080252    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40ab2f675d22"
	I0916 04:13:37.117945    4792 logs.go:123] Gathering logs for kube-scheduler [609e5463648c] ...
	I0916 04:13:37.117955    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 609e5463648c"
	I0916 04:13:37.139282    4792 logs.go:123] Gathering logs for kube-controller-manager [97104cca0786] ...
	I0916 04:13:37.139296    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97104cca0786"
	I0916 04:13:39.653456    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:13:44.655673    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:13:44.655839    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:13:44.667094    4792 logs.go:276] 2 containers: [99668d812a17 40ab2f675d22]
	I0916 04:13:44.667175    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:13:44.678059    4792 logs.go:276] 2 containers: [751f46d9a26d 1973f852f436]
	I0916 04:13:44.678147    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:13:44.689023    4792 logs.go:276] 1 containers: [ed5c57a396a3]
	I0916 04:13:44.689107    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:13:44.699658    4792 logs.go:276] 2 containers: [41d020d3dedc 609e5463648c]
	I0916 04:13:44.699743    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:13:44.715500    4792 logs.go:276] 1 containers: [0bd7f5ba91b9]
	I0916 04:13:44.715584    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:13:44.726143    4792 logs.go:276] 2 containers: [7a56c27a016a 97104cca0786]
	I0916 04:13:44.726223    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:13:44.736416    4792 logs.go:276] 0 containers: []
	W0916 04:13:44.736427    4792 logs.go:278] No container was found matching "kindnet"
	I0916 04:13:44.736494    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:13:44.747220    4792 logs.go:276] 2 containers: [03045b2f9560 ed1b9ebe2601]
	I0916 04:13:44.747238    4792 logs.go:123] Gathering logs for kubelet ...
	I0916 04:13:44.747244    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:13:44.783862    4792 logs.go:123] Gathering logs for kube-scheduler [609e5463648c] ...
	I0916 04:13:44.783876    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 609e5463648c"
	I0916 04:13:44.804986    4792 logs.go:123] Gathering logs for kube-proxy [0bd7f5ba91b9] ...
	I0916 04:13:44.804999    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd7f5ba91b9"
	I0916 04:13:44.816704    4792 logs.go:123] Gathering logs for etcd [751f46d9a26d] ...
	I0916 04:13:44.816715    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 751f46d9a26d"
	I0916 04:13:44.830598    4792 logs.go:123] Gathering logs for etcd [1973f852f436] ...
	I0916 04:13:44.830613    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1973f852f436"
	I0916 04:13:44.847125    4792 logs.go:123] Gathering logs for kube-scheduler [41d020d3dedc] ...
	I0916 04:13:44.847135    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41d020d3dedc"
	I0916 04:13:44.858817    4792 logs.go:123] Gathering logs for storage-provisioner [03045b2f9560] ...
	I0916 04:13:44.858828    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03045b2f9560"
	I0916 04:13:44.872558    4792 logs.go:123] Gathering logs for storage-provisioner [ed1b9ebe2601] ...
	I0916 04:13:44.872569    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed1b9ebe2601"
	I0916 04:13:44.884103    4792 logs.go:123] Gathering logs for container status ...
	I0916 04:13:44.884117    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:13:44.896001    4792 logs.go:123] Gathering logs for dmesg ...
	I0916 04:13:44.896011    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:13:44.899999    4792 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:13:44.900008    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:13:44.936096    4792 logs.go:123] Gathering logs for kube-apiserver [40ab2f675d22] ...
	I0916 04:13:44.936109    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40ab2f675d22"
	I0916 04:13:44.976150    4792 logs.go:123] Gathering logs for coredns [ed5c57a396a3] ...
	I0916 04:13:44.976164    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5c57a396a3"
	I0916 04:13:44.990401    4792 logs.go:123] Gathering logs for Docker ...
	I0916 04:13:44.990414    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:13:45.015887    4792 logs.go:123] Gathering logs for kube-apiserver [99668d812a17] ...
	I0916 04:13:45.015908    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99668d812a17"
	I0916 04:13:45.030452    4792 logs.go:123] Gathering logs for kube-controller-manager [7a56c27a016a] ...
	I0916 04:13:45.030463    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a56c27a016a"
	I0916 04:13:45.048774    4792 logs.go:123] Gathering logs for kube-controller-manager [97104cca0786] ...
	I0916 04:13:45.048788    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97104cca0786"
	I0916 04:13:47.563662    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:13:52.565817    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:13:52.565955    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:13:52.578035    4792 logs.go:276] 2 containers: [99668d812a17 40ab2f675d22]
	I0916 04:13:52.578122    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:13:52.588740    4792 logs.go:276] 2 containers: [751f46d9a26d 1973f852f436]
	I0916 04:13:52.588818    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:13:52.599299    4792 logs.go:276] 1 containers: [ed5c57a396a3]
	I0916 04:13:52.599385    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:13:52.609965    4792 logs.go:276] 2 containers: [41d020d3dedc 609e5463648c]
	I0916 04:13:52.610039    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:13:52.620925    4792 logs.go:276] 1 containers: [0bd7f5ba91b9]
	I0916 04:13:52.621008    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:13:52.631569    4792 logs.go:276] 2 containers: [7a56c27a016a 97104cca0786]
	I0916 04:13:52.631641    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:13:52.642027    4792 logs.go:276] 0 containers: []
	W0916 04:13:52.642038    4792 logs.go:278] No container was found matching "kindnet"
	I0916 04:13:52.642110    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:13:52.656832    4792 logs.go:276] 2 containers: [03045b2f9560 ed1b9ebe2601]
	I0916 04:13:52.656851    4792 logs.go:123] Gathering logs for kube-apiserver [99668d812a17] ...
	I0916 04:13:52.656857    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99668d812a17"
	I0916 04:13:52.671412    4792 logs.go:123] Gathering logs for etcd [1973f852f436] ...
	I0916 04:13:52.671421    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1973f852f436"
	I0916 04:13:52.685877    4792 logs.go:123] Gathering logs for kube-scheduler [41d020d3dedc] ...
	I0916 04:13:52.685887    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41d020d3dedc"
	I0916 04:13:52.697138    4792 logs.go:123] Gathering logs for kube-scheduler [609e5463648c] ...
	I0916 04:13:52.697149    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 609e5463648c"
	I0916 04:13:52.718319    4792 logs.go:123] Gathering logs for container status ...
	I0916 04:13:52.718330    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:13:52.731776    4792 logs.go:123] Gathering logs for kubelet ...
	I0916 04:13:52.731787    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:13:52.770963    4792 logs.go:123] Gathering logs for etcd [751f46d9a26d] ...
	I0916 04:13:52.770973    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 751f46d9a26d"
	I0916 04:13:52.784283    4792 logs.go:123] Gathering logs for coredns [ed5c57a396a3] ...
	I0916 04:13:52.784297    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5c57a396a3"
	I0916 04:13:52.795680    4792 logs.go:123] Gathering logs for kube-controller-manager [97104cca0786] ...
	I0916 04:13:52.795693    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97104cca0786"
	I0916 04:13:52.808470    4792 logs.go:123] Gathering logs for storage-provisioner [ed1b9ebe2601] ...
	I0916 04:13:52.808484    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed1b9ebe2601"
	I0916 04:13:52.820692    4792 logs.go:123] Gathering logs for dmesg ...
	I0916 04:13:52.820704    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:13:52.825161    4792 logs.go:123] Gathering logs for kube-controller-manager [7a56c27a016a] ...
	I0916 04:13:52.825167    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a56c27a016a"
	I0916 04:13:52.842795    4792 logs.go:123] Gathering logs for storage-provisioner [03045b2f9560] ...
	I0916 04:13:52.842809    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03045b2f9560"
	I0916 04:13:52.854617    4792 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:13:52.854628    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:13:52.888719    4792 logs.go:123] Gathering logs for kube-apiserver [40ab2f675d22] ...
	I0916 04:13:52.888733    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40ab2f675d22"
	I0916 04:13:52.935456    4792 logs.go:123] Gathering logs for kube-proxy [0bd7f5ba91b9] ...
	I0916 04:13:52.935469    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd7f5ba91b9"
	I0916 04:13:52.947397    4792 logs.go:123] Gathering logs for Docker ...
	I0916 04:13:52.947407    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:13:55.471553    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:14:00.473808    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:14:00.473936    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:14:00.488797    4792 logs.go:276] 2 containers: [99668d812a17 40ab2f675d22]
	I0916 04:14:00.488873    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:14:00.499635    4792 logs.go:276] 2 containers: [751f46d9a26d 1973f852f436]
	I0916 04:14:00.499721    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:14:00.510216    4792 logs.go:276] 1 containers: [ed5c57a396a3]
	I0916 04:14:00.510300    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:14:00.521125    4792 logs.go:276] 2 containers: [41d020d3dedc 609e5463648c]
	I0916 04:14:00.521210    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:14:00.535502    4792 logs.go:276] 1 containers: [0bd7f5ba91b9]
	I0916 04:14:00.535583    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:14:00.546090    4792 logs.go:276] 2 containers: [7a56c27a016a 97104cca0786]
	I0916 04:14:00.546169    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:14:00.556307    4792 logs.go:276] 0 containers: []
	W0916 04:14:00.556321    4792 logs.go:278] No container was found matching "kindnet"
	I0916 04:14:00.556397    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:14:00.566584    4792 logs.go:276] 2 containers: [03045b2f9560 ed1b9ebe2601]
	I0916 04:14:00.566603    4792 logs.go:123] Gathering logs for kubelet ...
	I0916 04:14:00.566609    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:14:00.604459    4792 logs.go:123] Gathering logs for kube-apiserver [40ab2f675d22] ...
	I0916 04:14:00.604471    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40ab2f675d22"
	I0916 04:14:00.646182    4792 logs.go:123] Gathering logs for coredns [ed5c57a396a3] ...
	I0916 04:14:00.646195    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5c57a396a3"
	I0916 04:14:00.657933    4792 logs.go:123] Gathering logs for kube-controller-manager [97104cca0786] ...
	I0916 04:14:00.657946    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97104cca0786"
	I0916 04:14:00.670851    4792 logs.go:123] Gathering logs for storage-provisioner [03045b2f9560] ...
	I0916 04:14:00.670868    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03045b2f9560"
	I0916 04:14:00.683157    4792 logs.go:123] Gathering logs for Docker ...
	I0916 04:14:00.683166    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:14:00.705194    4792 logs.go:123] Gathering logs for etcd [1973f852f436] ...
	I0916 04:14:00.705203    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1973f852f436"
	I0916 04:14:00.719361    4792 logs.go:123] Gathering logs for kube-scheduler [609e5463648c] ...
	I0916 04:14:00.719370    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 609e5463648c"
	I0916 04:14:00.740374    4792 logs.go:123] Gathering logs for kube-proxy [0bd7f5ba91b9] ...
	I0916 04:14:00.740383    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd7f5ba91b9"
	I0916 04:14:00.752114    4792 logs.go:123] Gathering logs for kube-controller-manager [7a56c27a016a] ...
	I0916 04:14:00.752127    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a56c27a016a"
	I0916 04:14:00.769807    4792 logs.go:123] Gathering logs for kube-scheduler [41d020d3dedc] ...
	I0916 04:14:00.769817    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41d020d3dedc"
	I0916 04:14:00.781014    4792 logs.go:123] Gathering logs for storage-provisioner [ed1b9ebe2601] ...
	I0916 04:14:00.781025    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed1b9ebe2601"
	I0916 04:14:00.792040    4792 logs.go:123] Gathering logs for container status ...
	I0916 04:14:00.792052    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:14:00.805250    4792 logs.go:123] Gathering logs for dmesg ...
	I0916 04:14:00.805266    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:14:00.809753    4792 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:14:00.809760    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:14:00.843652    4792 logs.go:123] Gathering logs for kube-apiserver [99668d812a17] ...
	I0916 04:14:00.843663    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99668d812a17"
	I0916 04:14:00.857388    4792 logs.go:123] Gathering logs for etcd [751f46d9a26d] ...
	I0916 04:14:00.857397    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 751f46d9a26d"
	I0916 04:14:03.373017    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:14:08.375167    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:14:08.375291    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:14:08.403367    4792 logs.go:276] 2 containers: [99668d812a17 40ab2f675d22]
	I0916 04:14:08.403456    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:14:08.415773    4792 logs.go:276] 2 containers: [751f46d9a26d 1973f852f436]
	I0916 04:14:08.415863    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:14:08.427900    4792 logs.go:276] 1 containers: [ed5c57a396a3]
	I0916 04:14:08.427980    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:14:08.438875    4792 logs.go:276] 2 containers: [41d020d3dedc 609e5463648c]
	I0916 04:14:08.438957    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:14:08.450058    4792 logs.go:276] 1 containers: [0bd7f5ba91b9]
	I0916 04:14:08.450141    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:14:08.460493    4792 logs.go:276] 2 containers: [7a56c27a016a 97104cca0786]
	I0916 04:14:08.460578    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:14:08.470824    4792 logs.go:276] 0 containers: []
	W0916 04:14:08.470835    4792 logs.go:278] No container was found matching "kindnet"
	I0916 04:14:08.470908    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:14:08.481080    4792 logs.go:276] 2 containers: [03045b2f9560 ed1b9ebe2601]
	I0916 04:14:08.481099    4792 logs.go:123] Gathering logs for kube-apiserver [40ab2f675d22] ...
	I0916 04:14:08.481105    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40ab2f675d22"
	I0916 04:14:08.518233    4792 logs.go:123] Gathering logs for storage-provisioner [03045b2f9560] ...
	I0916 04:14:08.518243    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03045b2f9560"
	I0916 04:14:08.529422    4792 logs.go:123] Gathering logs for container status ...
	I0916 04:14:08.529433    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:14:08.543343    4792 logs.go:123] Gathering logs for kubelet ...
	I0916 04:14:08.543355    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:14:08.581578    4792 logs.go:123] Gathering logs for etcd [751f46d9a26d] ...
	I0916 04:14:08.581591    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 751f46d9a26d"
	I0916 04:14:08.595331    4792 logs.go:123] Gathering logs for storage-provisioner [ed1b9ebe2601] ...
	I0916 04:14:08.595342    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed1b9ebe2601"
	I0916 04:14:08.606714    4792 logs.go:123] Gathering logs for Docker ...
	I0916 04:14:08.606724    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:14:08.629504    4792 logs.go:123] Gathering logs for dmesg ...
	I0916 04:14:08.629514    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:14:08.633995    4792 logs.go:123] Gathering logs for kube-apiserver [99668d812a17] ...
	I0916 04:14:08.634003    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99668d812a17"
	I0916 04:14:08.648325    4792 logs.go:123] Gathering logs for etcd [1973f852f436] ...
	I0916 04:14:08.648336    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1973f852f436"
	I0916 04:14:08.663162    4792 logs.go:123] Gathering logs for coredns [ed5c57a396a3] ...
	I0916 04:14:08.663175    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5c57a396a3"
	I0916 04:14:08.674299    4792 logs.go:123] Gathering logs for kube-proxy [0bd7f5ba91b9] ...
	I0916 04:14:08.674310    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd7f5ba91b9"
	I0916 04:14:08.685787    4792 logs.go:123] Gathering logs for kube-controller-manager [7a56c27a016a] ...
	I0916 04:14:08.685797    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a56c27a016a"
	I0916 04:14:08.703720    4792 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:14:08.703730    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:14:08.737850    4792 logs.go:123] Gathering logs for kube-scheduler [41d020d3dedc] ...
	I0916 04:14:08.737865    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41d020d3dedc"
	I0916 04:14:08.750149    4792 logs.go:123] Gathering logs for kube-scheduler [609e5463648c] ...
	I0916 04:14:08.750162    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 609e5463648c"
	I0916 04:14:08.770780    4792 logs.go:123] Gathering logs for kube-controller-manager [97104cca0786] ...
	I0916 04:14:08.770791    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97104cca0786"
	I0916 04:14:11.285443    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:14:16.287841    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:14:16.288037    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:14:16.307410    4792 logs.go:276] 2 containers: [99668d812a17 40ab2f675d22]
	I0916 04:14:16.307501    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:14:16.319507    4792 logs.go:276] 2 containers: [751f46d9a26d 1973f852f436]
	I0916 04:14:16.319600    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:14:16.330135    4792 logs.go:276] 1 containers: [ed5c57a396a3]
	I0916 04:14:16.330213    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:14:16.340756    4792 logs.go:276] 2 containers: [41d020d3dedc 609e5463648c]
	I0916 04:14:16.340841    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:14:16.351091    4792 logs.go:276] 1 containers: [0bd7f5ba91b9]
	I0916 04:14:16.351166    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:14:16.361968    4792 logs.go:276] 2 containers: [7a56c27a016a 97104cca0786]
	I0916 04:14:16.362045    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:14:16.372850    4792 logs.go:276] 0 containers: []
	W0916 04:14:16.372864    4792 logs.go:278] No container was found matching "kindnet"
	I0916 04:14:16.372925    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:14:16.388444    4792 logs.go:276] 2 containers: [03045b2f9560 ed1b9ebe2601]
	I0916 04:14:16.388464    4792 logs.go:123] Gathering logs for kube-apiserver [40ab2f675d22] ...
	I0916 04:14:16.388470    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40ab2f675d22"
	I0916 04:14:16.426545    4792 logs.go:123] Gathering logs for etcd [751f46d9a26d] ...
	I0916 04:14:16.426556    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 751f46d9a26d"
	I0916 04:14:16.447624    4792 logs.go:123] Gathering logs for storage-provisioner [03045b2f9560] ...
	I0916 04:14:16.447634    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03045b2f9560"
	I0916 04:14:16.460065    4792 logs.go:123] Gathering logs for Docker ...
	I0916 04:14:16.460074    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:14:16.483408    4792 logs.go:123] Gathering logs for coredns [ed5c57a396a3] ...
	I0916 04:14:16.483418    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5c57a396a3"
	I0916 04:14:16.495039    4792 logs.go:123] Gathering logs for kube-controller-manager [97104cca0786] ...
	I0916 04:14:16.495050    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97104cca0786"
	I0916 04:14:16.508188    4792 logs.go:123] Gathering logs for storage-provisioner [ed1b9ebe2601] ...
	I0916 04:14:16.508202    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed1b9ebe2601"
	I0916 04:14:16.519310    4792 logs.go:123] Gathering logs for kubelet ...
	I0916 04:14:16.519322    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:14:16.557142    4792 logs.go:123] Gathering logs for dmesg ...
	I0916 04:14:16.557150    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:14:16.561200    4792 logs.go:123] Gathering logs for kube-apiserver [99668d812a17] ...
	I0916 04:14:16.561210    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99668d812a17"
	I0916 04:14:16.575257    4792 logs.go:123] Gathering logs for kube-controller-manager [7a56c27a016a] ...
	I0916 04:14:16.575266    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a56c27a016a"
	I0916 04:14:16.592399    4792 logs.go:123] Gathering logs for kube-proxy [0bd7f5ba91b9] ...
	I0916 04:14:16.592410    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd7f5ba91b9"
	I0916 04:14:16.604077    4792 logs.go:123] Gathering logs for container status ...
	I0916 04:14:16.604088    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:14:16.617121    4792 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:14:16.617131    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:14:16.652505    4792 logs.go:123] Gathering logs for etcd [1973f852f436] ...
	I0916 04:14:16.652515    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1973f852f436"
	I0916 04:14:16.666794    4792 logs.go:123] Gathering logs for kube-scheduler [41d020d3dedc] ...
	I0916 04:14:16.666807    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41d020d3dedc"
	I0916 04:14:16.678996    4792 logs.go:123] Gathering logs for kube-scheduler [609e5463648c] ...
	I0916 04:14:16.679006    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 609e5463648c"
	I0916 04:14:19.201844    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:14:24.204557    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:14:24.204734    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:14:24.220983    4792 logs.go:276] 2 containers: [99668d812a17 40ab2f675d22]
	I0916 04:14:24.221091    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:14:24.234155    4792 logs.go:276] 2 containers: [751f46d9a26d 1973f852f436]
	I0916 04:14:24.234242    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:14:24.245259    4792 logs.go:276] 1 containers: [ed5c57a396a3]
	I0916 04:14:24.245339    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:14:24.259616    4792 logs.go:276] 2 containers: [41d020d3dedc 609e5463648c]
	I0916 04:14:24.259702    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:14:24.269996    4792 logs.go:276] 1 containers: [0bd7f5ba91b9]
	I0916 04:14:24.270083    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:14:24.281137    4792 logs.go:276] 2 containers: [7a56c27a016a 97104cca0786]
	I0916 04:14:24.281221    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:14:24.291613    4792 logs.go:276] 0 containers: []
	W0916 04:14:24.291628    4792 logs.go:278] No container was found matching "kindnet"
	I0916 04:14:24.291706    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:14:24.302279    4792 logs.go:276] 2 containers: [03045b2f9560 ed1b9ebe2601]
	I0916 04:14:24.302302    4792 logs.go:123] Gathering logs for kube-proxy [0bd7f5ba91b9] ...
	I0916 04:14:24.302309    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd7f5ba91b9"
	I0916 04:14:24.313913    4792 logs.go:123] Gathering logs for kube-controller-manager [7a56c27a016a] ...
	I0916 04:14:24.313928    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a56c27a016a"
	I0916 04:14:24.331373    4792 logs.go:123] Gathering logs for storage-provisioner [03045b2f9560] ...
	I0916 04:14:24.331387    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03045b2f9560"
	I0916 04:14:24.342679    4792 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:14:24.342689    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:14:24.379238    4792 logs.go:123] Gathering logs for kube-apiserver [99668d812a17] ...
	I0916 04:14:24.379250    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99668d812a17"
	I0916 04:14:24.393372    4792 logs.go:123] Gathering logs for kube-scheduler [41d020d3dedc] ...
	I0916 04:14:24.393386    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41d020d3dedc"
	I0916 04:14:24.405423    4792 logs.go:123] Gathering logs for storage-provisioner [ed1b9ebe2601] ...
	I0916 04:14:24.405434    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed1b9ebe2601"
	I0916 04:14:24.419757    4792 logs.go:123] Gathering logs for Docker ...
	I0916 04:14:24.419769    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:14:24.443334    4792 logs.go:123] Gathering logs for kubelet ...
	I0916 04:14:24.443346    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:14:24.479156    4792 logs.go:123] Gathering logs for kube-apiserver [40ab2f675d22] ...
	I0916 04:14:24.479167    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40ab2f675d22"
	I0916 04:14:24.517943    4792 logs.go:123] Gathering logs for etcd [1973f852f436] ...
	I0916 04:14:24.517964    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1973f852f436"
	I0916 04:14:24.532898    4792 logs.go:123] Gathering logs for kube-scheduler [609e5463648c] ...
	I0916 04:14:24.532908    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 609e5463648c"
	I0916 04:14:24.554083    4792 logs.go:123] Gathering logs for kube-controller-manager [97104cca0786] ...
	I0916 04:14:24.554095    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97104cca0786"
	I0916 04:14:24.567645    4792 logs.go:123] Gathering logs for container status ...
	I0916 04:14:24.567656    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:14:24.580126    4792 logs.go:123] Gathering logs for dmesg ...
	I0916 04:14:24.580138    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:14:24.585139    4792 logs.go:123] Gathering logs for etcd [751f46d9a26d] ...
	I0916 04:14:24.585145    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 751f46d9a26d"
	I0916 04:14:24.599887    4792 logs.go:123] Gathering logs for coredns [ed5c57a396a3] ...
	I0916 04:14:24.599901    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5c57a396a3"
	I0916 04:14:27.113556    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:14:32.115841    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:14:32.116026    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:14:32.135598    4792 logs.go:276] 2 containers: [99668d812a17 40ab2f675d22]
	I0916 04:14:32.135714    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:14:32.149919    4792 logs.go:276] 2 containers: [751f46d9a26d 1973f852f436]
	I0916 04:14:32.149996    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:14:32.162140    4792 logs.go:276] 1 containers: [ed5c57a396a3]
	I0916 04:14:32.162209    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:14:32.172789    4792 logs.go:276] 2 containers: [41d020d3dedc 609e5463648c]
	I0916 04:14:32.172872    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:14:32.191147    4792 logs.go:276] 1 containers: [0bd7f5ba91b9]
	I0916 04:14:32.191230    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:14:32.201730    4792 logs.go:276] 2 containers: [7a56c27a016a 97104cca0786]
	I0916 04:14:32.201821    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:14:32.212322    4792 logs.go:276] 0 containers: []
	W0916 04:14:32.212334    4792 logs.go:278] No container was found matching "kindnet"
	I0916 04:14:32.212405    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:14:32.222864    4792 logs.go:276] 2 containers: [03045b2f9560 ed1b9ebe2601]
	I0916 04:14:32.222881    4792 logs.go:123] Gathering logs for kube-controller-manager [97104cca0786] ...
	I0916 04:14:32.222887    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97104cca0786"
	I0916 04:14:32.235823    4792 logs.go:123] Gathering logs for Docker ...
	I0916 04:14:32.235836    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:14:32.259088    4792 logs.go:123] Gathering logs for kube-controller-manager [7a56c27a016a] ...
	I0916 04:14:32.259100    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a56c27a016a"
	I0916 04:14:32.276862    4792 logs.go:123] Gathering logs for kube-apiserver [40ab2f675d22] ...
	I0916 04:14:32.276876    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40ab2f675d22"
	I0916 04:14:32.315628    4792 logs.go:123] Gathering logs for etcd [1973f852f436] ...
	I0916 04:14:32.315639    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1973f852f436"
	I0916 04:14:32.333829    4792 logs.go:123] Gathering logs for coredns [ed5c57a396a3] ...
	I0916 04:14:32.333843    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5c57a396a3"
	I0916 04:14:32.345700    4792 logs.go:123] Gathering logs for kube-proxy [0bd7f5ba91b9] ...
	I0916 04:14:32.345712    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd7f5ba91b9"
	I0916 04:14:32.357385    4792 logs.go:123] Gathering logs for kube-apiserver [99668d812a17] ...
	I0916 04:14:32.357397    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99668d812a17"
	I0916 04:14:32.371475    4792 logs.go:123] Gathering logs for dmesg ...
	I0916 04:14:32.371484    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:14:32.375727    4792 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:14:32.375734    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:14:32.409157    4792 logs.go:123] Gathering logs for etcd [751f46d9a26d] ...
	I0916 04:14:32.409168    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 751f46d9a26d"
	I0916 04:14:32.423284    4792 logs.go:123] Gathering logs for kube-scheduler [609e5463648c] ...
	I0916 04:14:32.423294    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 609e5463648c"
	I0916 04:14:32.447979    4792 logs.go:123] Gathering logs for storage-provisioner [ed1b9ebe2601] ...
	I0916 04:14:32.447992    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed1b9ebe2601"
	I0916 04:14:32.461017    4792 logs.go:123] Gathering logs for kubelet ...
	I0916 04:14:32.461030    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:14:32.500612    4792 logs.go:123] Gathering logs for storage-provisioner [03045b2f9560] ...
	I0916 04:14:32.500631    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03045b2f9560"
	I0916 04:14:32.532367    4792 logs.go:123] Gathering logs for container status ...
	I0916 04:14:32.532380    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:14:32.549860    4792 logs.go:123] Gathering logs for kube-scheduler [41d020d3dedc] ...
	I0916 04:14:32.549876    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41d020d3dedc"
	I0916 04:14:35.062159    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:14:40.064327    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:14:40.064495    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:14:40.076658    4792 logs.go:276] 2 containers: [99668d812a17 40ab2f675d22]
	I0916 04:14:40.076738    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:14:40.087751    4792 logs.go:276] 2 containers: [751f46d9a26d 1973f852f436]
	I0916 04:14:40.087835    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:14:40.099925    4792 logs.go:276] 1 containers: [ed5c57a396a3]
	I0916 04:14:40.100017    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:14:40.110447    4792 logs.go:276] 2 containers: [41d020d3dedc 609e5463648c]
	I0916 04:14:40.110535    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:14:40.121116    4792 logs.go:276] 1 containers: [0bd7f5ba91b9]
	I0916 04:14:40.121199    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:14:40.131738    4792 logs.go:276] 2 containers: [7a56c27a016a 97104cca0786]
	I0916 04:14:40.131818    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:14:40.146568    4792 logs.go:276] 0 containers: []
	W0916 04:14:40.146580    4792 logs.go:278] No container was found matching "kindnet"
	I0916 04:14:40.146653    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:14:40.157332    4792 logs.go:276] 2 containers: [03045b2f9560 ed1b9ebe2601]
	I0916 04:14:40.157350    4792 logs.go:123] Gathering logs for kube-controller-manager [97104cca0786] ...
	I0916 04:14:40.157356    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97104cca0786"
	I0916 04:14:40.170843    4792 logs.go:123] Gathering logs for storage-provisioner [03045b2f9560] ...
	I0916 04:14:40.170853    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03045b2f9560"
	I0916 04:14:40.182590    4792 logs.go:123] Gathering logs for Docker ...
	I0916 04:14:40.182600    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:14:40.205139    4792 logs.go:123] Gathering logs for etcd [751f46d9a26d] ...
	I0916 04:14:40.205146    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 751f46d9a26d"
	I0916 04:14:40.218936    4792 logs.go:123] Gathering logs for coredns [ed5c57a396a3] ...
	I0916 04:14:40.218945    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5c57a396a3"
	I0916 04:14:40.230870    4792 logs.go:123] Gathering logs for kube-proxy [0bd7f5ba91b9] ...
	I0916 04:14:40.230880    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd7f5ba91b9"
	I0916 04:14:40.242492    4792 logs.go:123] Gathering logs for dmesg ...
	I0916 04:14:40.242509    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:14:40.246587    4792 logs.go:123] Gathering logs for kube-scheduler [41d020d3dedc] ...
	I0916 04:14:40.246597    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41d020d3dedc"
	I0916 04:14:40.259165    4792 logs.go:123] Gathering logs for container status ...
	I0916 04:14:40.259177    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:14:40.270960    4792 logs.go:123] Gathering logs for kubelet ...
	I0916 04:14:40.270973    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:14:40.309374    4792 logs.go:123] Gathering logs for etcd [1973f852f436] ...
	I0916 04:14:40.309389    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1973f852f436"
	I0916 04:14:40.325762    4792 logs.go:123] Gathering logs for kube-scheduler [609e5463648c] ...
	I0916 04:14:40.325773    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 609e5463648c"
	I0916 04:14:40.347933    4792 logs.go:123] Gathering logs for kube-controller-manager [7a56c27a016a] ...
	I0916 04:14:40.347944    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a56c27a016a"
	I0916 04:14:40.372379    4792 logs.go:123] Gathering logs for storage-provisioner [ed1b9ebe2601] ...
	I0916 04:14:40.372390    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed1b9ebe2601"
	I0916 04:14:40.383413    4792 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:14:40.383426    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:14:40.419426    4792 logs.go:123] Gathering logs for kube-apiserver [99668d812a17] ...
	I0916 04:14:40.419437    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99668d812a17"
	I0916 04:14:40.433205    4792 logs.go:123] Gathering logs for kube-apiserver [40ab2f675d22] ...
	I0916 04:14:40.433220    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40ab2f675d22"
	I0916 04:14:42.971446    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:14:47.973993    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:14:47.974075    4792 kubeadm.go:597] duration metric: took 4m3.9805465s to restartPrimaryControlPlane
	W0916 04:14:47.974157    4792 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0916 04:14:47.974192    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0916 04:14:49.012088    4792 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.037904542s)
	I0916 04:14:49.012179    4792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 04:14:49.017144    4792 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 04:14:49.020293    4792 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 04:14:49.023121    4792 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 04:14:49.023127    4792 kubeadm.go:157] found existing configuration files:
	
	I0916 04:14:49.023151    4792 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50516 /etc/kubernetes/admin.conf
	I0916 04:14:49.025569    4792 kubeadm.go:163] "https://control-plane.minikube.internal:50516" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50516 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 04:14:49.025605    4792 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 04:14:49.028846    4792 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50516 /etc/kubernetes/kubelet.conf
	I0916 04:14:49.032107    4792 kubeadm.go:163] "https://control-plane.minikube.internal:50516" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50516 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 04:14:49.032159    4792 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 04:14:49.035440    4792 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50516 /etc/kubernetes/controller-manager.conf
	I0916 04:14:49.038254    4792 kubeadm.go:163] "https://control-plane.minikube.internal:50516" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50516 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 04:14:49.038302    4792 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 04:14:49.041178    4792 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50516 /etc/kubernetes/scheduler.conf
	I0916 04:14:49.044581    4792 kubeadm.go:163] "https://control-plane.minikube.internal:50516" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50516 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 04:14:49.044620    4792 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
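Each grep/rm pair above checks one kubeconfig for the expected control-plane endpoint and removes the file when it is missing or points elsewhere. The same stale-config cleanup as a loop (endpoint copied from the log; purely illustrative):

    endpoint="https://control-plane.minikube.internal:50516"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "$endpoint" /etc/kubernetes/$f || sudo rm -f /etc/kubernetes/$f
    done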
	I0916 04:14:49.047700    4792 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0916 04:14:49.066537    4792 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0916 04:14:49.066566    4792 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 04:14:49.116399    4792 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 04:14:49.116452    4792 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 04:14:49.116501    4792 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0916 04:14:49.166692    4792 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 04:14:49.174808    4792 out.go:235]   - Generating certificates and keys ...
	I0916 04:14:49.174843    4792 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 04:14:49.174876    4792 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 04:14:49.174923    4792 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0916 04:14:49.174956    4792 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0916 04:14:49.174991    4792 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0916 04:14:49.175020    4792 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0916 04:14:49.175058    4792 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0916 04:14:49.175097    4792 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0916 04:14:49.175136    4792 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0916 04:14:49.175176    4792 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0916 04:14:49.175197    4792 kubeadm.go:310] [certs] Using the existing "sa" key
	I0916 04:14:49.175238    4792 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 04:14:49.250395    4792 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 04:14:49.310547    4792 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 04:14:49.395080    4792 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 04:14:49.425345    4792 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 04:14:49.455496    4792 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 04:14:49.455803    4792 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 04:14:49.455859    4792 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 04:14:49.541588    4792 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 04:14:49.544729    4792 out.go:235]   - Booting up control plane ...
	I0916 04:14:49.544780    4792 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 04:14:49.544830    4792 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 04:14:49.544866    4792 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 04:14:49.544931    4792 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 04:14:49.545099    4792 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0916 04:14:53.543834    4792 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.000972 seconds
	I0916 04:14:53.543901    4792 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 04:14:53.548672    4792 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 04:14:54.057541    4792 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 04:14:54.057687    4792 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-716000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 04:14:54.561209    4792 kubeadm.go:310] [bootstrap-token] Using token: in0fi5.n676jkcsrk0svadq
	I0916 04:14:54.565481    4792 out.go:235]   - Configuring RBAC rules ...
	I0916 04:14:54.565545    4792 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 04:14:54.565593    4792 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 04:14:54.571532    4792 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 04:14:54.572535    4792 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 04:14:54.573378    4792 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 04:14:54.574188    4792 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 04:14:54.577148    4792 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 04:14:54.751944    4792 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 04:14:54.964642    4792 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 04:14:54.965064    4792 kubeadm.go:310] 
	I0916 04:14:54.965097    4792 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 04:14:54.965102    4792 kubeadm.go:310] 
	I0916 04:14:54.965142    4792 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 04:14:54.965147    4792 kubeadm.go:310] 
	I0916 04:14:54.965163    4792 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 04:14:54.965189    4792 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 04:14:54.965214    4792 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 04:14:54.965218    4792 kubeadm.go:310] 
	I0916 04:14:54.965265    4792 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 04:14:54.965270    4792 kubeadm.go:310] 
	I0916 04:14:54.965294    4792 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 04:14:54.965297    4792 kubeadm.go:310] 
	I0916 04:14:54.965330    4792 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 04:14:54.965368    4792 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 04:14:54.965407    4792 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 04:14:54.965410    4792 kubeadm.go:310] 
	I0916 04:14:54.965452    4792 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 04:14:54.965499    4792 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 04:14:54.965502    4792 kubeadm.go:310] 
	I0916 04:14:54.965539    4792 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token in0fi5.n676jkcsrk0svadq \
	I0916 04:14:54.965592    4792 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:dd10c29c679a7875936030b69b87136fbf1dbd0c06d390de7566972b8cb65116 \
	I0916 04:14:54.965603    4792 kubeadm.go:310] 	--control-plane 
	I0916 04:14:54.965607    4792 kubeadm.go:310] 
	I0916 04:14:54.965656    4792 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 04:14:54.965669    4792 kubeadm.go:310] 
	I0916 04:14:54.965725    4792 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token in0fi5.n676jkcsrk0svadq \
	I0916 04:14:54.965776    4792 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:dd10c29c679a7875936030b69b87136fbf1dbd0c06d390de7566972b8cb65116 
	I0916 04:14:54.965942    4792 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
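The lone preflight warning is harmless for this flow (minikube starts the kubelet itself), and the fix the warning suggests is a single unit enable, run inside the guest:

    sudo systemctl enable kubelet.service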
	I0916 04:14:54.965951    4792 cni.go:84] Creating CNI manager for ""
	I0916 04:14:54.965960    4792 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 04:14:54.968602    4792 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0916 04:14:54.975743    4792 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0916 04:14:54.978787    4792 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
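The 496-byte conflist written above is not echoed in the log. For orientation, a representative bridge configuration in the CNI conflist format looks like the following (field values illustrative only, not the file minikube actually wrote):

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }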
	I0916 04:14:54.986730    4792 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 04:14:54.986821    4792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 04:14:54.986822    4792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-716000 minikube.k8s.io/updated_at=2024_09_16T04_14_54_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=stopped-upgrade-716000 minikube.k8s.io/primary=true
	I0916 04:14:54.989954    4792 ops.go:34] apiserver oom_adj: -16
	I0916 04:14:55.036111    4792 kubeadm.go:1113] duration metric: took 49.361416ms to wait for elevateKubeSystemPrivileges
	I0916 04:14:55.036127    4792 kubeadm.go:394] duration metric: took 4m11.056594291s to StartCluster
	I0916 04:14:55.036137    4792 settings.go:142] acquiring lock: {Name:mk9072b559308de66cf3dabb49aa5dd0b6d18e61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 04:14:55.036232    4792 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19651-1133/kubeconfig
	I0916 04:14:55.036639    4792 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19651-1133/kubeconfig: {Name:mk6c71bad5b7ea3c1178ed072ac13c9e3d1e147d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 04:14:55.036854    4792 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 04:14:55.036907    4792 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 04:14:55.036943    4792 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-716000"
	I0916 04:14:55.036952    4792 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-716000"
	W0916 04:14:55.036983    4792 addons.go:243] addon storage-provisioner should already be in state true
	I0916 04:14:55.036996    4792 host.go:66] Checking if "stopped-upgrade-716000" exists ...
	I0916 04:14:55.036992    4792 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-716000"
	I0916 04:14:55.037012    4792 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-716000"
	I0916 04:14:55.036982    4792 config.go:182] Loaded profile config "stopped-upgrade-716000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0916 04:14:55.037464    4792 retry.go:31] will retry after 890.279497ms: connect: dial unix /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/stopped-upgrade-716000/monitor: connect: connection refused
	I0916 04:14:55.038408    4792 kapi.go:59] client config for stopped-upgrade-716000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/stopped-upgrade-716000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/stopped-upgrade-716000/client.key", CAFile:"/Users/jenkins/minikube-integration/19651-1133/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1066e9800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 04:14:55.038545    4792 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-716000"
	W0916 04:14:55.038551    4792 addons.go:243] addon default-storageclass should already be in state true
	I0916 04:14:55.038560    4792 host.go:66] Checking if "stopped-upgrade-716000" exists ...
	I0916 04:14:55.039127    4792 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 04:14:55.039133    4792 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 04:14:55.039140    4792 sshutil.go:53] new ssh client: &{IP:localhost Port:50481 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/stopped-upgrade-716000/id_rsa Username:docker}
	I0916 04:14:55.040760    4792 out.go:177] * Verifying Kubernetes components...
	I0916 04:14:55.048769    4792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 04:14:55.136939    4792 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 04:14:55.142647    4792 api_server.go:52] waiting for apiserver process to appear ...
	I0916 04:14:55.142720    4792 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 04:14:55.147066    4792 api_server.go:72] duration metric: took 110.200916ms to wait for apiserver process to appear ...
	I0916 04:14:55.147076    4792 api_server.go:88] waiting for apiserver healthz status ...
	I0916 04:14:55.147086    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:14:55.179217    4792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 04:14:55.496969    4792 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0916 04:14:55.496984    4792 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0916 04:14:55.934603    4792 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 04:14:55.938671    4792 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 04:14:55.938680    4792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 04:14:55.938692    4792 sshutil.go:53] new ssh client: &{IP:localhost Port:50481 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/stopped-upgrade-716000/id_rsa Username:docker}
	I0916 04:14:55.972222    4792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 04:15:00.149065    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:15:00.149094    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:15:05.149205    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:15:05.149228    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:15:10.149387    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:15:10.149417    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:15:15.149691    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:15:15.149716    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:15:20.150114    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:15:20.150152    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:15:25.150707    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:15:25.150733    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0916 04:15:25.498654    4792 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0916 04:15:25.502869    4792 out.go:177] * Enabled addons: storage-provisioner
	I0916 04:15:25.509703    4792 addons.go:510] duration metric: took 30.473417625s for enable addons: enabled=[storage-provisioner]
	I0916 04:15:30.151433    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:15:30.151485    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:15:35.153009    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:15:35.153067    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:15:40.154402    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:15:40.154463    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:15:45.156131    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:15:45.156156    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:15:50.158322    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:15:50.158376    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:15:55.160680    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
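Each "Checking apiserver healthz" / "stopped" pair above is a probe that times out after roughly five seconds without ever reaching the apiserver. The same check can be reproduced by hand from inside the guest (-k because the apiserver serves a self-signed certificate; --max-time mirrors the client timeout seen in the log):

    curl -k --max-time 5 https://10.0.2.15:8443/healthz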
	I0916 04:15:55.160836    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:15:55.178713    4792 logs.go:276] 1 containers: [48042a3d7cbc]
	I0916 04:15:55.178811    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:15:55.195332    4792 logs.go:276] 1 containers: [2f5a5e7b9e98]
	I0916 04:15:55.195409    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:15:55.205664    4792 logs.go:276] 2 containers: [b3d786d9e441 94da14967167]
	I0916 04:15:55.205746    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:15:55.215924    4792 logs.go:276] 1 containers: [fbdd30ce59a6]
	I0916 04:15:55.216008    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:15:55.226072    4792 logs.go:276] 1 containers: [bfb8e053b87d]
	I0916 04:15:55.226146    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:15:55.236379    4792 logs.go:276] 1 containers: [2127fa67c447]
	I0916 04:15:55.236462    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:15:55.246210    4792 logs.go:276] 0 containers: []
	W0916 04:15:55.246220    4792 logs.go:278] No container was found matching "kindnet"
	I0916 04:15:55.246281    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:15:55.256359    4792 logs.go:276] 1 containers: [78fb9b152d66]
	I0916 04:15:55.256374    4792 logs.go:123] Gathering logs for kube-controller-manager [2127fa67c447] ...
	I0916 04:15:55.256380    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2127fa67c447"
	I0916 04:15:55.273896    4792 logs.go:123] Gathering logs for storage-provisioner [78fb9b152d66] ...
	I0916 04:15:55.273907    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78fb9b152d66"
	I0916 04:15:55.285389    4792 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:15:55.285400    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:15:55.325207    4792 logs.go:123] Gathering logs for etcd [2f5a5e7b9e98] ...
	I0916 04:15:55.325220    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f5a5e7b9e98"
	I0916 04:15:55.339566    4792 logs.go:123] Gathering logs for coredns [b3d786d9e441] ...
	I0916 04:15:55.339577    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3d786d9e441"
	I0916 04:15:55.351065    4792 logs.go:123] Gathering logs for coredns [94da14967167] ...
	I0916 04:15:55.351079    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94da14967167"
	I0916 04:15:55.362322    4792 logs.go:123] Gathering logs for kube-proxy [bfb8e053b87d] ...
	I0916 04:15:55.362336    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfb8e053b87d"
	I0916 04:15:55.374164    4792 logs.go:123] Gathering logs for container status ...
	I0916 04:15:55.374173    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:15:55.388358    4792 logs.go:123] Gathering logs for kubelet ...
	I0916 04:15:55.388367    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:15:55.425969    4792 logs.go:123] Gathering logs for dmesg ...
	I0916 04:15:55.425977    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:15:55.430238    4792 logs.go:123] Gathering logs for kube-apiserver [48042a3d7cbc] ...
	I0916 04:15:55.430245    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48042a3d7cbc"
	I0916 04:15:55.444682    4792 logs.go:123] Gathering logs for kube-scheduler [fbdd30ce59a6] ...
	I0916 04:15:55.444695    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbdd30ce59a6"
	I0916 04:15:55.459997    4792 logs.go:123] Gathering logs for Docker ...
	I0916 04:15:55.460010    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:15:57.985479    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:16:02.988044    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:16:02.988301    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:16:03.010500    4792 logs.go:276] 1 containers: [48042a3d7cbc]
	I0916 04:16:03.010615    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:16:03.025450    4792 logs.go:276] 1 containers: [2f5a5e7b9e98]
	I0916 04:16:03.025546    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:16:03.040249    4792 logs.go:276] 2 containers: [b3d786d9e441 94da14967167]
	I0916 04:16:03.040329    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:16:03.051098    4792 logs.go:276] 1 containers: [fbdd30ce59a6]
	I0916 04:16:03.051177    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:16:03.061410    4792 logs.go:276] 1 containers: [bfb8e053b87d]
	I0916 04:16:03.061493    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:16:03.071973    4792 logs.go:276] 1 containers: [2127fa67c447]
	I0916 04:16:03.072049    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:16:03.081788    4792 logs.go:276] 0 containers: []
	W0916 04:16:03.081800    4792 logs.go:278] No container was found matching "kindnet"
	I0916 04:16:03.081860    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:16:03.092627    4792 logs.go:276] 1 containers: [78fb9b152d66]
	I0916 04:16:03.092641    4792 logs.go:123] Gathering logs for etcd [2f5a5e7b9e98] ...
	I0916 04:16:03.092646    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f5a5e7b9e98"
	I0916 04:16:03.106469    4792 logs.go:123] Gathering logs for coredns [b3d786d9e441] ...
	I0916 04:16:03.106480    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3d786d9e441"
	I0916 04:16:03.117907    4792 logs.go:123] Gathering logs for kube-scheduler [fbdd30ce59a6] ...
	I0916 04:16:03.117920    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbdd30ce59a6"
	I0916 04:16:03.132678    4792 logs.go:123] Gathering logs for kube-proxy [bfb8e053b87d] ...
	I0916 04:16:03.132690    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfb8e053b87d"
	I0916 04:16:03.144259    4792 logs.go:123] Gathering logs for Docker ...
	I0916 04:16:03.144270    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:16:03.169033    4792 logs.go:123] Gathering logs for kubelet ...
	I0916 04:16:03.169040    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:16:03.206775    4792 logs.go:123] Gathering logs for dmesg ...
	I0916 04:16:03.206784    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:16:03.210968    4792 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:16:03.210977    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:16:03.245473    4792 logs.go:123] Gathering logs for storage-provisioner [78fb9b152d66] ...
	I0916 04:16:03.245486    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78fb9b152d66"
	I0916 04:16:03.256951    4792 logs.go:123] Gathering logs for container status ...
	I0916 04:16:03.256960    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:16:03.268573    4792 logs.go:123] Gathering logs for kube-apiserver [48042a3d7cbc] ...
	I0916 04:16:03.268584    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48042a3d7cbc"
	I0916 04:16:03.282483    4792 logs.go:123] Gathering logs for coredns [94da14967167] ...
	I0916 04:16:03.282496    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94da14967167"
	I0916 04:16:03.294297    4792 logs.go:123] Gathering logs for kube-controller-manager [2127fa67c447] ...
	I0916 04:16:03.294306    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2127fa67c447"
	I0916 04:16:05.813676    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:16:10.816412    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:16:10.817126    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:16:10.862069    4792 logs.go:276] 1 containers: [48042a3d7cbc]
	I0916 04:16:10.862224    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:16:10.882272    4792 logs.go:276] 1 containers: [2f5a5e7b9e98]
	I0916 04:16:10.882380    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:16:10.901522    4792 logs.go:276] 2 containers: [b3d786d9e441 94da14967167]
	I0916 04:16:10.901617    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:16:10.912923    4792 logs.go:276] 1 containers: [fbdd30ce59a6]
	I0916 04:16:10.912996    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:16:10.923536    4792 logs.go:276] 1 containers: [bfb8e053b87d]
	I0916 04:16:10.923619    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:16:10.934273    4792 logs.go:276] 1 containers: [2127fa67c447]
	I0916 04:16:10.934349    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:16:10.944503    4792 logs.go:276] 0 containers: []
	W0916 04:16:10.944514    4792 logs.go:278] No container was found matching "kindnet"
	I0916 04:16:10.944582    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:16:10.958727    4792 logs.go:276] 1 containers: [78fb9b152d66]
	I0916 04:16:10.958740    4792 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:16:10.958746    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:16:10.993109    4792 logs.go:123] Gathering logs for kube-apiserver [48042a3d7cbc] ...
	I0916 04:16:10.993119    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48042a3d7cbc"
	I0916 04:16:11.007750    4792 logs.go:123] Gathering logs for kubelet ...
	I0916 04:16:11.007758    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:16:11.045675    4792 logs.go:123] Gathering logs for dmesg ...
	I0916 04:16:11.045683    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:16:11.050232    4792 logs.go:123] Gathering logs for etcd [2f5a5e7b9e98] ...
	I0916 04:16:11.050239    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f5a5e7b9e98"
	I0916 04:16:11.069270    4792 logs.go:123] Gathering logs for coredns [b3d786d9e441] ...
	I0916 04:16:11.069279    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3d786d9e441"
	I0916 04:16:11.080880    4792 logs.go:123] Gathering logs for coredns [94da14967167] ...
	I0916 04:16:11.080892    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94da14967167"
	I0916 04:16:11.092518    4792 logs.go:123] Gathering logs for kube-scheduler [fbdd30ce59a6] ...
	I0916 04:16:11.092531    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbdd30ce59a6"
	I0916 04:16:11.107457    4792 logs.go:123] Gathering logs for kube-proxy [bfb8e053b87d] ...
	I0916 04:16:11.107467    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfb8e053b87d"
	I0916 04:16:11.118885    4792 logs.go:123] Gathering logs for kube-controller-manager [2127fa67c447] ...
	I0916 04:16:11.118896    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2127fa67c447"
	I0916 04:16:11.136179    4792 logs.go:123] Gathering logs for storage-provisioner [78fb9b152d66] ...
	I0916 04:16:11.136189    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78fb9b152d66"
	I0916 04:16:11.147099    4792 logs.go:123] Gathering logs for Docker ...
	I0916 04:16:11.147108    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:16:11.171463    4792 logs.go:123] Gathering logs for container status ...
	I0916 04:16:11.171470    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:16:13.684466    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:16:18.687191    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:16:18.687778    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:16:18.724827    4792 logs.go:276] 1 containers: [48042a3d7cbc]
	I0916 04:16:18.724995    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:16:18.747092    4792 logs.go:276] 1 containers: [2f5a5e7b9e98]
	I0916 04:16:18.747220    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:16:18.762670    4792 logs.go:276] 2 containers: [b3d786d9e441 94da14967167]
	I0916 04:16:18.762760    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:16:18.775232    4792 logs.go:276] 1 containers: [fbdd30ce59a6]
	I0916 04:16:18.775317    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:16:18.786348    4792 logs.go:276] 1 containers: [bfb8e053b87d]
	I0916 04:16:18.786423    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:16:18.796712    4792 logs.go:276] 1 containers: [2127fa67c447]
	I0916 04:16:18.796775    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:16:18.807156    4792 logs.go:276] 0 containers: []
	W0916 04:16:18.807168    4792 logs.go:278] No container was found matching "kindnet"
	I0916 04:16:18.807239    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:16:18.817188    4792 logs.go:276] 1 containers: [78fb9b152d66]
	I0916 04:16:18.817202    4792 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:16:18.817208    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:16:18.853898    4792 logs.go:123] Gathering logs for etcd [2f5a5e7b9e98] ...
	I0916 04:16:18.853910    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f5a5e7b9e98"
	I0916 04:16:18.867755    4792 logs.go:123] Gathering logs for coredns [b3d786d9e441] ...
	I0916 04:16:18.867769    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3d786d9e441"
	I0916 04:16:18.879324    4792 logs.go:123] Gathering logs for coredns [94da14967167] ...
	I0916 04:16:18.879335    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94da14967167"
	I0916 04:16:18.894587    4792 logs.go:123] Gathering logs for kube-scheduler [fbdd30ce59a6] ...
	I0916 04:16:18.894601    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbdd30ce59a6"
	I0916 04:16:18.911234    4792 logs.go:123] Gathering logs for kube-controller-manager [2127fa67c447] ...
	I0916 04:16:18.911246    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2127fa67c447"
	I0916 04:16:18.928606    4792 logs.go:123] Gathering logs for container status ...
	I0916 04:16:18.928619    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:16:18.939632    4792 logs.go:123] Gathering logs for kubelet ...
	I0916 04:16:18.939643    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:16:18.975446    4792 logs.go:123] Gathering logs for dmesg ...
	I0916 04:16:18.975457    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:16:18.979666    4792 logs.go:123] Gathering logs for kube-apiserver [48042a3d7cbc] ...
	I0916 04:16:18.979673    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48042a3d7cbc"
	I0916 04:16:18.993496    4792 logs.go:123] Gathering logs for kube-proxy [bfb8e053b87d] ...
	I0916 04:16:18.993508    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfb8e053b87d"
	I0916 04:16:19.004874    4792 logs.go:123] Gathering logs for storage-provisioner [78fb9b152d66] ...
	I0916 04:16:19.004885    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78fb9b152d66"
	I0916 04:16:19.016223    4792 logs.go:123] Gathering logs for Docker ...
	I0916 04:16:19.016235    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:16:21.542824    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:16:26.545238    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:16:26.545474    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:16:26.570221    4792 logs.go:276] 1 containers: [48042a3d7cbc]
	I0916 04:16:26.570352    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:16:26.591915    4792 logs.go:276] 1 containers: [2f5a5e7b9e98]
	I0916 04:16:26.592021    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:16:26.603875    4792 logs.go:276] 2 containers: [b3d786d9e441 94da14967167]
	I0916 04:16:26.603946    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:16:26.614811    4792 logs.go:276] 1 containers: [fbdd30ce59a6]
	I0916 04:16:26.614900    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:16:26.625584    4792 logs.go:276] 1 containers: [bfb8e053b87d]
	I0916 04:16:26.625661    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:16:26.635752    4792 logs.go:276] 1 containers: [2127fa67c447]
	I0916 04:16:26.635830    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:16:26.645692    4792 logs.go:276] 0 containers: []
	W0916 04:16:26.645709    4792 logs.go:278] No container was found matching "kindnet"
	I0916 04:16:26.645782    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:16:26.660376    4792 logs.go:276] 1 containers: [78fb9b152d66]
	I0916 04:16:26.660395    4792 logs.go:123] Gathering logs for coredns [94da14967167] ...
	I0916 04:16:26.660401    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94da14967167"
	I0916 04:16:26.676192    4792 logs.go:123] Gathering logs for kube-scheduler [fbdd30ce59a6] ...
	I0916 04:16:26.676203    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbdd30ce59a6"
	I0916 04:16:26.690710    4792 logs.go:123] Gathering logs for kube-proxy [bfb8e053b87d] ...
	I0916 04:16:26.690721    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfb8e053b87d"
	I0916 04:16:26.702210    4792 logs.go:123] Gathering logs for Docker ...
	I0916 04:16:26.702219    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:16:26.726653    4792 logs.go:123] Gathering logs for dmesg ...
	I0916 04:16:26.726660    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:16:26.730494    4792 logs.go:123] Gathering logs for kube-apiserver [48042a3d7cbc] ...
	I0916 04:16:26.730503    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48042a3d7cbc"
	I0916 04:16:26.743967    4792 logs.go:123] Gathering logs for coredns [b3d786d9e441] ...
	I0916 04:16:26.743976    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3d786d9e441"
	I0916 04:16:26.755009    4792 logs.go:123] Gathering logs for kube-controller-manager [2127fa67c447] ...
	I0916 04:16:26.755023    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2127fa67c447"
	I0916 04:16:26.771854    4792 logs.go:123] Gathering logs for storage-provisioner [78fb9b152d66] ...
	I0916 04:16:26.771865    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78fb9b152d66"
	I0916 04:16:26.791281    4792 logs.go:123] Gathering logs for container status ...
	I0916 04:16:26.791291    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:16:26.802601    4792 logs.go:123] Gathering logs for kubelet ...
	I0916 04:16:26.802613    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:16:26.838417    4792 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:16:26.838435    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:16:26.873980    4792 logs.go:123] Gathering logs for etcd [2f5a5e7b9e98] ...
	I0916 04:16:26.873994    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f5a5e7b9e98"
	I0916 04:16:29.390147    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:16:34.392784    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:16:34.393290    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:16:34.435419    4792 logs.go:276] 1 containers: [48042a3d7cbc]
	I0916 04:16:34.435582    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:16:34.456960    4792 logs.go:276] 1 containers: [2f5a5e7b9e98]
	I0916 04:16:34.457076    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:16:34.471786    4792 logs.go:276] 2 containers: [b3d786d9e441 94da14967167]
	I0916 04:16:34.471862    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:16:34.484985    4792 logs.go:276] 1 containers: [fbdd30ce59a6]
	I0916 04:16:34.485074    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:16:34.496106    4792 logs.go:276] 1 containers: [bfb8e053b87d]
	I0916 04:16:34.496184    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:16:34.506470    4792 logs.go:276] 1 containers: [2127fa67c447]
	I0916 04:16:34.506551    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:16:34.516376    4792 logs.go:276] 0 containers: []
	W0916 04:16:34.516392    4792 logs.go:278] No container was found matching "kindnet"
	I0916 04:16:34.516462    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:16:34.526652    4792 logs.go:276] 1 containers: [78fb9b152d66]
	I0916 04:16:34.526668    4792 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:16:34.526674    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:16:34.573816    4792 logs.go:123] Gathering logs for kube-apiserver [48042a3d7cbc] ...
	I0916 04:16:34.573831    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48042a3d7cbc"
	I0916 04:16:34.594840    4792 logs.go:123] Gathering logs for coredns [b3d786d9e441] ...
	I0916 04:16:34.594851    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3d786d9e441"
	I0916 04:16:34.606625    4792 logs.go:123] Gathering logs for coredns [94da14967167] ...
	I0916 04:16:34.606639    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94da14967167"
	I0916 04:16:34.618065    4792 logs.go:123] Gathering logs for kube-proxy [bfb8e053b87d] ...
	I0916 04:16:34.618079    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfb8e053b87d"
	I0916 04:16:34.629291    4792 logs.go:123] Gathering logs for Docker ...
	I0916 04:16:34.629302    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:16:34.652683    4792 logs.go:123] Gathering logs for container status ...
	I0916 04:16:34.652693    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:16:34.663943    4792 logs.go:123] Gathering logs for kubelet ...
	I0916 04:16:34.663953    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:16:34.701584    4792 logs.go:123] Gathering logs for dmesg ...
	I0916 04:16:34.701593    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:16:34.705816    4792 logs.go:123] Gathering logs for etcd [2f5a5e7b9e98] ...
	I0916 04:16:34.705825    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f5a5e7b9e98"
	I0916 04:16:34.719613    4792 logs.go:123] Gathering logs for kube-scheduler [fbdd30ce59a6] ...
	I0916 04:16:34.719624    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbdd30ce59a6"
	I0916 04:16:34.735303    4792 logs.go:123] Gathering logs for kube-controller-manager [2127fa67c447] ...
	I0916 04:16:34.735313    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2127fa67c447"
	I0916 04:16:34.753111    4792 logs.go:123] Gathering logs for storage-provisioner [78fb9b152d66] ...
	I0916 04:16:34.753122    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78fb9b152d66"
	I0916 04:16:37.266440    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:16:42.268146    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:16:42.268670    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:16:42.308824    4792 logs.go:276] 1 containers: [48042a3d7cbc]
	I0916 04:16:42.308992    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:16:42.334777    4792 logs.go:276] 1 containers: [2f5a5e7b9e98]
	I0916 04:16:42.334919    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:16:42.348965    4792 logs.go:276] 2 containers: [b3d786d9e441 94da14967167]
	I0916 04:16:42.349049    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:16:42.361306    4792 logs.go:276] 1 containers: [fbdd30ce59a6]
	I0916 04:16:42.361392    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:16:42.372229    4792 logs.go:276] 1 containers: [bfb8e053b87d]
	I0916 04:16:42.372304    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:16:42.386746    4792 logs.go:276] 1 containers: [2127fa67c447]
	I0916 04:16:42.386836    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:16:42.396820    4792 logs.go:276] 0 containers: []
	W0916 04:16:42.396839    4792 logs.go:278] No container was found matching "kindnet"
	I0916 04:16:42.396907    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:16:42.407509    4792 logs.go:276] 1 containers: [78fb9b152d66]
	I0916 04:16:42.407525    4792 logs.go:123] Gathering logs for storage-provisioner [78fb9b152d66] ...
	I0916 04:16:42.407531    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78fb9b152d66"
	I0916 04:16:42.418566    4792 logs.go:123] Gathering logs for kubelet ...
	I0916 04:16:42.418579    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:16:42.454439    4792 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:16:42.454449    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:16:42.490736    4792 logs.go:123] Gathering logs for kube-apiserver [48042a3d7cbc] ...
	I0916 04:16:42.490750    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48042a3d7cbc"
	I0916 04:16:42.505688    4792 logs.go:123] Gathering logs for coredns [b3d786d9e441] ...
	I0916 04:16:42.505702    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3d786d9e441"
	I0916 04:16:42.517585    4792 logs.go:123] Gathering logs for coredns [94da14967167] ...
	I0916 04:16:42.517595    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94da14967167"
	I0916 04:16:42.528968    4792 logs.go:123] Gathering logs for kube-scheduler [fbdd30ce59a6] ...
	I0916 04:16:42.528977    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbdd30ce59a6"
	I0916 04:16:42.544339    4792 logs.go:123] Gathering logs for kube-proxy [bfb8e053b87d] ...
	I0916 04:16:42.544349    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfb8e053b87d"
	I0916 04:16:42.559474    4792 logs.go:123] Gathering logs for Docker ...
	I0916 04:16:42.559485    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:16:42.584060    4792 logs.go:123] Gathering logs for container status ...
	I0916 04:16:42.584068    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:16:42.595636    4792 logs.go:123] Gathering logs for dmesg ...
	I0916 04:16:42.595650    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:16:42.599821    4792 logs.go:123] Gathering logs for etcd [2f5a5e7b9e98] ...
	I0916 04:16:42.599830    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f5a5e7b9e98"
	I0916 04:16:42.613720    4792 logs.go:123] Gathering logs for kube-controller-manager [2127fa67c447] ...
	I0916 04:16:42.613730    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2127fa67c447"
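Between probes, the runner enumerates one container per control-plane component by name filter; "kindnet" matching nothing is expected, since this cluster was not started with the kindnet CNI. A rough sketch of that enumeration step, assuming plain local docker CLI access (the component names and the k8s_ name prefix come from the commands quoted above):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists all container IDs whose name matches k8s_<component>,
// mirroring the "docker ps -a --filter=name=... --format={{.ID}}" commands.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
	}
	for _, c := range components {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Printf("%s: %v\n", c, err)
			continue
		}
		fmt.Printf("%d containers for %q: %v\n", len(ids), c, ids)
	}
}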
	I0916 04:16:45.137203    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:16:50.139521    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:16:50.140055    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:16:50.179865    4792 logs.go:276] 1 containers: [48042a3d7cbc]
	I0916 04:16:50.180010    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:16:50.201309    4792 logs.go:276] 1 containers: [2f5a5e7b9e98]
	I0916 04:16:50.201417    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:16:50.216929    4792 logs.go:276] 2 containers: [b3d786d9e441 94da14967167]
	I0916 04:16:50.217003    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:16:50.229518    4792 logs.go:276] 1 containers: [fbdd30ce59a6]
	I0916 04:16:50.229604    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:16:50.247828    4792 logs.go:276] 1 containers: [bfb8e053b87d]
	I0916 04:16:50.247905    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:16:50.258139    4792 logs.go:276] 1 containers: [2127fa67c447]
	I0916 04:16:50.258220    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:16:50.268656    4792 logs.go:276] 0 containers: []
	W0916 04:16:50.268666    4792 logs.go:278] No container was found matching "kindnet"
	I0916 04:16:50.268727    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:16:50.278926    4792 logs.go:276] 1 containers: [78fb9b152d66]
	I0916 04:16:50.278943    4792 logs.go:123] Gathering logs for etcd [2f5a5e7b9e98] ...
	I0916 04:16:50.278949    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f5a5e7b9e98"
	I0916 04:16:50.302565    4792 logs.go:123] Gathering logs for kube-scheduler [fbdd30ce59a6] ...
	I0916 04:16:50.302576    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbdd30ce59a6"
	I0916 04:16:50.317958    4792 logs.go:123] Gathering logs for kube-controller-manager [2127fa67c447] ...
	I0916 04:16:50.317971    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2127fa67c447"
	I0916 04:16:50.335709    4792 logs.go:123] Gathering logs for storage-provisioner [78fb9b152d66] ...
	I0916 04:16:50.335718    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78fb9b152d66"
	I0916 04:16:50.346655    4792 logs.go:123] Gathering logs for Docker ...
	I0916 04:16:50.346668    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:16:50.369843    4792 logs.go:123] Gathering logs for kubelet ...
	I0916 04:16:50.369853    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:16:50.406742    4792 logs.go:123] Gathering logs for dmesg ...
	I0916 04:16:50.406751    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:16:50.410836    4792 logs.go:123] Gathering logs for kube-apiserver [48042a3d7cbc] ...
	I0916 04:16:50.410842    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48042a3d7cbc"
	I0916 04:16:50.424821    4792 logs.go:123] Gathering logs for container status ...
	I0916 04:16:50.424831    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:16:50.436236    4792 logs.go:123] Gathering logs for kube-proxy [bfb8e053b87d] ...
	I0916 04:16:50.436246    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfb8e053b87d"
	I0916 04:16:50.447720    4792 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:16:50.447729    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:16:50.483125    4792 logs.go:123] Gathering logs for coredns [b3d786d9e441] ...
	I0916 04:16:50.483136    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3d786d9e441"
	I0916 04:16:50.495381    4792 logs.go:123] Gathering logs for coredns [94da14967167] ...
	I0916 04:16:50.495392    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94da14967167"
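Each "Gathering logs for ..." pair then tails the last 400 lines from a container or systemd unit through bash -c. A small sketch under the same assumptions (the container ID is one of the IDs listed above; the commands mirror the quoted ones exactly):

package main

import (
	"fmt"
	"os/exec"
)

// tail400 runs one quoted command through bash -c, the way the ssh_runner
// lines do, and returns whatever it printed.
func tail400(cmd string) (string, error) {
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	return string(out), err
}

func main() {
	// Last 400 lines from one container (ID taken from the listing above).
	if out, err := tail400("docker logs --tail 400 48042a3d7cbc"); err == nil {
		fmt.Print(out)
	}
	// Last 400 lines from the Docker and cri-docker units, same flags as the log.
	if out, err := tail400("sudo journalctl -u docker -u cri-docker -n 400"); err == nil {
		fmt.Print(out)
	}
}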
	I0916 04:16:53.009453    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:16:58.011626    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:16:58.011703    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:16:58.022669    4792 logs.go:276] 1 containers: [48042a3d7cbc]
	I0916 04:16:58.022743    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:16:58.032761    4792 logs.go:276] 1 containers: [2f5a5e7b9e98]
	I0916 04:16:58.032856    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:16:58.044509    4792 logs.go:276] 2 containers: [b3d786d9e441 94da14967167]
	I0916 04:16:58.044565    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:16:58.055356    4792 logs.go:276] 1 containers: [fbdd30ce59a6]
	I0916 04:16:58.055414    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:16:58.065502    4792 logs.go:276] 1 containers: [bfb8e053b87d]
	I0916 04:16:58.065562    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:16:58.076630    4792 logs.go:276] 1 containers: [2127fa67c447]
	I0916 04:16:58.076695    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:16:58.087215    4792 logs.go:276] 0 containers: []
	W0916 04:16:58.087224    4792 logs.go:278] No container was found matching "kindnet"
	I0916 04:16:58.087279    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:16:58.098927    4792 logs.go:276] 1 containers: [78fb9b152d66]
	I0916 04:16:58.098944    4792 logs.go:123] Gathering logs for kubelet ...
	I0916 04:16:58.098950    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:16:58.137173    4792 logs.go:123] Gathering logs for coredns [94da14967167] ...
	I0916 04:16:58.137193    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94da14967167"
	I0916 04:16:58.151177    4792 logs.go:123] Gathering logs for kube-scheduler [fbdd30ce59a6] ...
	I0916 04:16:58.151191    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbdd30ce59a6"
	I0916 04:16:58.172264    4792 logs.go:123] Gathering logs for kube-proxy [bfb8e053b87d] ...
	I0916 04:16:58.172279    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfb8e053b87d"
	I0916 04:16:58.185997    4792 logs.go:123] Gathering logs for container status ...
	I0916 04:16:58.186010    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:16:58.198593    4792 logs.go:123] Gathering logs for storage-provisioner [78fb9b152d66] ...
	I0916 04:16:58.198611    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78fb9b152d66"
	I0916 04:16:58.212315    4792 logs.go:123] Gathering logs for Docker ...
	I0916 04:16:58.212333    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:16:58.237043    4792 logs.go:123] Gathering logs for dmesg ...
	I0916 04:16:58.237063    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:16:58.242178    4792 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:16:58.242191    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:16:58.278351    4792 logs.go:123] Gathering logs for kube-apiserver [48042a3d7cbc] ...
	I0916 04:16:58.278362    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48042a3d7cbc"
	I0916 04:16:58.293677    4792 logs.go:123] Gathering logs for etcd [2f5a5e7b9e98] ...
	I0916 04:16:58.293688    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f5a5e7b9e98"
	I0916 04:16:58.308171    4792 logs.go:123] Gathering logs for coredns [b3d786d9e441] ...
	I0916 04:16:58.308180    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3d786d9e441"
	I0916 04:16:58.320298    4792 logs.go:123] Gathering logs for kube-controller-manager [2127fa67c447] ...
	I0916 04:16:58.320309    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2127fa67c447"
	I0916 04:17:00.840317    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:17:05.842549    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:17:05.842628    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:17:05.854208    4792 logs.go:276] 1 containers: [48042a3d7cbc]
	I0916 04:17:05.854288    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:17:05.864450    4792 logs.go:276] 1 containers: [2f5a5e7b9e98]
	I0916 04:17:05.864530    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:17:05.875609    4792 logs.go:276] 2 containers: [b3d786d9e441 94da14967167]
	I0916 04:17:05.875684    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:17:05.892153    4792 logs.go:276] 1 containers: [fbdd30ce59a6]
	I0916 04:17:05.892237    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:17:05.903108    4792 logs.go:276] 1 containers: [bfb8e053b87d]
	I0916 04:17:05.903192    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:17:05.912894    4792 logs.go:276] 1 containers: [2127fa67c447]
	I0916 04:17:05.912963    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:17:05.923961    4792 logs.go:276] 0 containers: []
	W0916 04:17:05.923971    4792 logs.go:278] No container was found matching "kindnet"
	I0916 04:17:05.924031    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:17:05.933850    4792 logs.go:276] 1 containers: [78fb9b152d66]
	I0916 04:17:05.933867    4792 logs.go:123] Gathering logs for kube-apiserver [48042a3d7cbc] ...
	I0916 04:17:05.933873    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48042a3d7cbc"
	I0916 04:17:05.953811    4792 logs.go:123] Gathering logs for etcd [2f5a5e7b9e98] ...
	I0916 04:17:05.953823    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f5a5e7b9e98"
	I0916 04:17:05.967649    4792 logs.go:123] Gathering logs for kube-scheduler [fbdd30ce59a6] ...
	I0916 04:17:05.967659    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbdd30ce59a6"
	I0916 04:17:05.982567    4792 logs.go:123] Gathering logs for kube-proxy [bfb8e053b87d] ...
	I0916 04:17:05.982577    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfb8e053b87d"
	I0916 04:17:05.994144    4792 logs.go:123] Gathering logs for kube-controller-manager [2127fa67c447] ...
	I0916 04:17:05.994154    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2127fa67c447"
	I0916 04:17:06.011348    4792 logs.go:123] Gathering logs for storage-provisioner [78fb9b152d66] ...
	I0916 04:17:06.011357    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78fb9b152d66"
	I0916 04:17:06.022530    4792 logs.go:123] Gathering logs for kubelet ...
	I0916 04:17:06.022543    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:17:06.060441    4792 logs.go:123] Gathering logs for dmesg ...
	I0916 04:17:06.060451    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:17:06.064638    4792 logs.go:123] Gathering logs for coredns [94da14967167] ...
	I0916 04:17:06.064646    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94da14967167"
	I0916 04:17:06.076436    4792 logs.go:123] Gathering logs for Docker ...
	I0916 04:17:06.076449    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:17:06.100323    4792 logs.go:123] Gathering logs for container status ...
	I0916 04:17:06.100331    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:17:06.118926    4792 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:17:06.118935    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:17:06.154316    4792 logs.go:123] Gathering logs for coredns [b3d786d9e441] ...
	I0916 04:17:06.154332    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3d786d9e441"
	I0916 04:17:08.667934    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:17:13.669898    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:17:13.670459    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:17:13.708121    4792 logs.go:276] 1 containers: [48042a3d7cbc]
	I0916 04:17:13.708275    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:17:13.729148    4792 logs.go:276] 1 containers: [2f5a5e7b9e98]
	I0916 04:17:13.729256    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:17:13.744168    4792 logs.go:276] 4 containers: [d8cdcec0bb63 aee713dc0b6b b3d786d9e441 94da14967167]
	I0916 04:17:13.744258    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:17:13.756585    4792 logs.go:276] 1 containers: [fbdd30ce59a6]
	I0916 04:17:13.756656    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:17:13.767070    4792 logs.go:276] 1 containers: [bfb8e053b87d]
	I0916 04:17:13.767157    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:17:13.777568    4792 logs.go:276] 1 containers: [2127fa67c447]
	I0916 04:17:13.777638    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:17:13.787785    4792 logs.go:276] 0 containers: []
	W0916 04:17:13.787799    4792 logs.go:278] No container was found matching "kindnet"
	I0916 04:17:13.787863    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:17:13.800034    4792 logs.go:276] 1 containers: [78fb9b152d66]
	I0916 04:17:13.800049    4792 logs.go:123] Gathering logs for kube-scheduler [fbdd30ce59a6] ...
	I0916 04:17:13.800054    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbdd30ce59a6"
	I0916 04:17:13.814778    4792 logs.go:123] Gathering logs for container status ...
	I0916 04:17:13.814788    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:17:13.826131    4792 logs.go:123] Gathering logs for kube-proxy [bfb8e053b87d] ...
	I0916 04:17:13.826143    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfb8e053b87d"
	I0916 04:17:13.838167    4792 logs.go:123] Gathering logs for storage-provisioner [78fb9b152d66] ...
	I0916 04:17:13.838179    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78fb9b152d66"
	I0916 04:17:13.851681    4792 logs.go:123] Gathering logs for dmesg ...
	I0916 04:17:13.851691    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:17:13.855853    4792 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:17:13.855863    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:17:13.890138    4792 logs.go:123] Gathering logs for kube-apiserver [48042a3d7cbc] ...
	I0916 04:17:13.890151    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48042a3d7cbc"
	I0916 04:17:13.905103    4792 logs.go:123] Gathering logs for etcd [2f5a5e7b9e98] ...
	I0916 04:17:13.905116    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f5a5e7b9e98"
	I0916 04:17:13.918836    4792 logs.go:123] Gathering logs for kube-controller-manager [2127fa67c447] ...
	I0916 04:17:13.918845    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2127fa67c447"
	I0916 04:17:13.936608    4792 logs.go:123] Gathering logs for Docker ...
	I0916 04:17:13.936619    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:17:13.960752    4792 logs.go:123] Gathering logs for kubelet ...
	I0916 04:17:13.960765    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:17:13.998554    4792 logs.go:123] Gathering logs for coredns [d8cdcec0bb63] ...
	I0916 04:17:13.998561    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8cdcec0bb63"
	I0916 04:17:14.010432    4792 logs.go:123] Gathering logs for coredns [b3d786d9e441] ...
	I0916 04:17:14.010443    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3d786d9e441"
	I0916 04:17:14.022194    4792 logs.go:123] Gathering logs for coredns [94da14967167] ...
	I0916 04:17:14.022206    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94da14967167"
	I0916 04:17:14.040222    4792 logs.go:123] Gathering logs for coredns [aee713dc0b6b] ...
	I0916 04:17:14.040233    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aee713dc0b6b"
	I0916 04:17:16.561207    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:17:21.563695    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:17:21.563764    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:17:21.575265    4792 logs.go:276] 1 containers: [48042a3d7cbc]
	I0916 04:17:21.575344    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:17:21.587098    4792 logs.go:276] 1 containers: [2f5a5e7b9e98]
	I0916 04:17:21.587171    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:17:21.598796    4792 logs.go:276] 4 containers: [d8cdcec0bb63 aee713dc0b6b b3d786d9e441 94da14967167]
	I0916 04:17:21.598877    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:17:21.609351    4792 logs.go:276] 1 containers: [fbdd30ce59a6]
	I0916 04:17:21.609411    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:17:21.620672    4792 logs.go:276] 1 containers: [bfb8e053b87d]
	I0916 04:17:21.620746    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:17:21.631992    4792 logs.go:276] 1 containers: [2127fa67c447]
	I0916 04:17:21.632063    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:17:21.644798    4792 logs.go:276] 0 containers: []
	W0916 04:17:21.644807    4792 logs.go:278] No container was found matching "kindnet"
	I0916 04:17:21.644854    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:17:21.657198    4792 logs.go:276] 1 containers: [78fb9b152d66]
	I0916 04:17:21.657214    4792 logs.go:123] Gathering logs for coredns [94da14967167] ...
	I0916 04:17:21.657222    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94da14967167"
	I0916 04:17:21.670584    4792 logs.go:123] Gathering logs for kube-proxy [bfb8e053b87d] ...
	I0916 04:17:21.670600    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfb8e053b87d"
	I0916 04:17:21.683623    4792 logs.go:123] Gathering logs for kube-apiserver [48042a3d7cbc] ...
	I0916 04:17:21.683638    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48042a3d7cbc"
	I0916 04:17:21.698228    4792 logs.go:123] Gathering logs for etcd [2f5a5e7b9e98] ...
	I0916 04:17:21.698238    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f5a5e7b9e98"
	I0916 04:17:21.713125    4792 logs.go:123] Gathering logs for coredns [aee713dc0b6b] ...
	I0916 04:17:21.713135    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aee713dc0b6b"
	I0916 04:17:21.729082    4792 logs.go:123] Gathering logs for Docker ...
	I0916 04:17:21.729093    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:17:21.754665    4792 logs.go:123] Gathering logs for dmesg ...
	I0916 04:17:21.754681    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:17:21.759678    4792 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:17:21.759688    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:17:21.798519    4792 logs.go:123] Gathering logs for kube-scheduler [fbdd30ce59a6] ...
	I0916 04:17:21.798530    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbdd30ce59a6"
	I0916 04:17:21.817039    4792 logs.go:123] Gathering logs for container status ...
	I0916 04:17:21.817049    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:17:21.829909    4792 logs.go:123] Gathering logs for coredns [d8cdcec0bb63] ...
	I0916 04:17:21.829921    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8cdcec0bb63"
	I0916 04:17:21.849684    4792 logs.go:123] Gathering logs for coredns [b3d786d9e441] ...
	I0916 04:17:21.849696    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3d786d9e441"
	I0916 04:17:21.863683    4792 logs.go:123] Gathering logs for kube-controller-manager [2127fa67c447] ...
	I0916 04:17:21.863692    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2127fa67c447"
	I0916 04:17:21.882340    4792 logs.go:123] Gathering logs for kubelet ...
	I0916 04:17:21.882353    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:17:21.921141    4792 logs.go:123] Gathering logs for storage-provisioner [78fb9b152d66] ...
	I0916 04:17:21.921160    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78fb9b152d66"
	I0916 04:17:24.437495    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:17:29.440282    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:17:29.440873    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:17:29.478962    4792 logs.go:276] 1 containers: [48042a3d7cbc]
	I0916 04:17:29.479131    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:17:29.499144    4792 logs.go:276] 1 containers: [2f5a5e7b9e98]
	I0916 04:17:29.499269    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:17:29.514479    4792 logs.go:276] 4 containers: [d8cdcec0bb63 aee713dc0b6b b3d786d9e441 94da14967167]
	I0916 04:17:29.514577    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:17:29.526732    4792 logs.go:276] 1 containers: [fbdd30ce59a6]
	I0916 04:17:29.526818    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:17:29.537321    4792 logs.go:276] 1 containers: [bfb8e053b87d]
	I0916 04:17:29.537403    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:17:29.553612    4792 logs.go:276] 1 containers: [2127fa67c447]
	I0916 04:17:29.553681    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:17:29.568272    4792 logs.go:276] 0 containers: []
	W0916 04:17:29.568284    4792 logs.go:278] No container was found matching "kindnet"
	I0916 04:17:29.568346    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:17:29.579510    4792 logs.go:276] 1 containers: [78fb9b152d66]
	I0916 04:17:29.579533    4792 logs.go:123] Gathering logs for coredns [b3d786d9e441] ...
	I0916 04:17:29.579539    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3d786d9e441"
	I0916 04:17:29.591343    4792 logs.go:123] Gathering logs for kube-scheduler [fbdd30ce59a6] ...
	I0916 04:17:29.591353    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbdd30ce59a6"
	I0916 04:17:29.606591    4792 logs.go:123] Gathering logs for storage-provisioner [78fb9b152d66] ...
	I0916 04:17:29.606599    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78fb9b152d66"
	I0916 04:17:29.618042    4792 logs.go:123] Gathering logs for coredns [d8cdcec0bb63] ...
	I0916 04:17:29.618058    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8cdcec0bb63"
	I0916 04:17:29.629519    4792 logs.go:123] Gathering logs for coredns [aee713dc0b6b] ...
	I0916 04:17:29.629529    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aee713dc0b6b"
	I0916 04:17:29.641308    4792 logs.go:123] Gathering logs for Docker ...
	I0916 04:17:29.641319    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:17:29.666078    4792 logs.go:123] Gathering logs for container status ...
	I0916 04:17:29.666085    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:17:29.677319    4792 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:17:29.677327    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:17:29.714299    4792 logs.go:123] Gathering logs for kube-controller-manager [2127fa67c447] ...
	I0916 04:17:29.714317    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2127fa67c447"
	I0916 04:17:29.733591    4792 logs.go:123] Gathering logs for kube-apiserver [48042a3d7cbc] ...
	I0916 04:17:29.733610    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48042a3d7cbc"
	I0916 04:17:29.749710    4792 logs.go:123] Gathering logs for coredns [94da14967167] ...
	I0916 04:17:29.749725    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94da14967167"
	I0916 04:17:29.762997    4792 logs.go:123] Gathering logs for etcd [2f5a5e7b9e98] ...
	I0916 04:17:29.763009    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f5a5e7b9e98"
	I0916 04:17:29.778233    4792 logs.go:123] Gathering logs for kube-proxy [bfb8e053b87d] ...
	I0916 04:17:29.778247    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfb8e053b87d"
	I0916 04:17:29.791958    4792 logs.go:123] Gathering logs for kubelet ...
	I0916 04:17:29.791974    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:17:29.833525    4792 logs.go:123] Gathering logs for dmesg ...
	I0916 04:17:29.833543    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:17:32.340301    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:17:37.342525    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:17:37.343094    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:17:37.382458    4792 logs.go:276] 1 containers: [48042a3d7cbc]
	I0916 04:17:37.382675    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:17:37.404931    4792 logs.go:276] 1 containers: [2f5a5e7b9e98]
	I0916 04:17:37.405045    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:17:37.420034    4792 logs.go:276] 4 containers: [d8cdcec0bb63 aee713dc0b6b b3d786d9e441 94da14967167]
	I0916 04:17:37.420129    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:17:37.431966    4792 logs.go:276] 1 containers: [fbdd30ce59a6]
	I0916 04:17:37.432044    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:17:37.447438    4792 logs.go:276] 1 containers: [bfb8e053b87d]
	I0916 04:17:37.447518    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:17:37.457721    4792 logs.go:276] 1 containers: [2127fa67c447]
	I0916 04:17:37.457802    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:17:37.468209    4792 logs.go:276] 0 containers: []
	W0916 04:17:37.468226    4792 logs.go:278] No container was found matching "kindnet"
	I0916 04:17:37.468294    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:17:37.482599    4792 logs.go:276] 1 containers: [78fb9b152d66]
	I0916 04:17:37.482616    4792 logs.go:123] Gathering logs for kube-apiserver [48042a3d7cbc] ...
	I0916 04:17:37.482622    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48042a3d7cbc"
	I0916 04:17:37.497386    4792 logs.go:123] Gathering logs for coredns [b3d786d9e441] ...
	I0916 04:17:37.497397    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3d786d9e441"
	I0916 04:17:37.509019    4792 logs.go:123] Gathering logs for coredns [aee713dc0b6b] ...
	I0916 04:17:37.509029    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aee713dc0b6b"
	I0916 04:17:37.520269    4792 logs.go:123] Gathering logs for coredns [94da14967167] ...
	I0916 04:17:37.520279    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94da14967167"
	I0916 04:17:37.536092    4792 logs.go:123] Gathering logs for kubelet ...
	I0916 04:17:37.536104    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:17:37.575078    4792 logs.go:123] Gathering logs for dmesg ...
	I0916 04:17:37.575090    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:17:37.579337    4792 logs.go:123] Gathering logs for kube-scheduler [fbdd30ce59a6] ...
	I0916 04:17:37.579342    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbdd30ce59a6"
	I0916 04:17:37.597796    4792 logs.go:123] Gathering logs for storage-provisioner [78fb9b152d66] ...
	I0916 04:17:37.597805    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78fb9b152d66"
	I0916 04:17:37.611460    4792 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:17:37.611469    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:17:37.646502    4792 logs.go:123] Gathering logs for coredns [d8cdcec0bb63] ...
	I0916 04:17:37.646511    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8cdcec0bb63"
	I0916 04:17:37.663854    4792 logs.go:123] Gathering logs for kube-controller-manager [2127fa67c447] ...
	I0916 04:17:37.663865    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2127fa67c447"
	I0916 04:17:37.681549    4792 logs.go:123] Gathering logs for Docker ...
	I0916 04:17:37.681561    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:17:37.706632    4792 logs.go:123] Gathering logs for container status ...
	I0916 04:17:37.706639    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:17:37.718184    4792 logs.go:123] Gathering logs for etcd [2f5a5e7b9e98] ...
	I0916 04:17:37.718194    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f5a5e7b9e98"
	I0916 04:17:37.731795    4792 logs.go:123] Gathering logs for kube-proxy [bfb8e053b87d] ...
	I0916 04:17:37.731806    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfb8e053b87d"
	I0916 04:17:40.245653    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:17:45.248394    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:17:45.248503    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:17:45.264465    4792 logs.go:276] 1 containers: [48042a3d7cbc]
	I0916 04:17:45.264542    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:17:45.276832    4792 logs.go:276] 1 containers: [2f5a5e7b9e98]
	I0916 04:17:45.276895    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:17:45.292831    4792 logs.go:276] 4 containers: [d8cdcec0bb63 aee713dc0b6b b3d786d9e441 94da14967167]
	I0916 04:17:45.292901    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:17:45.304578    4792 logs.go:276] 1 containers: [fbdd30ce59a6]
	I0916 04:17:45.304650    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:17:45.316890    4792 logs.go:276] 1 containers: [bfb8e053b87d]
	I0916 04:17:45.316990    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:17:45.328402    4792 logs.go:276] 1 containers: [2127fa67c447]
	I0916 04:17:45.328468    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:17:45.338916    4792 logs.go:276] 0 containers: []
	W0916 04:17:45.338929    4792 logs.go:278] No container was found matching "kindnet"
	I0916 04:17:45.338990    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:17:45.350269    4792 logs.go:276] 1 containers: [78fb9b152d66]
	I0916 04:17:45.350287    4792 logs.go:123] Gathering logs for kubelet ...
	I0916 04:17:45.350293    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:17:45.389311    4792 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:17:45.389329    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:17:45.428372    4792 logs.go:123] Gathering logs for kube-scheduler [fbdd30ce59a6] ...
	I0916 04:17:45.428384    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbdd30ce59a6"
	I0916 04:17:45.445503    4792 logs.go:123] Gathering logs for coredns [d8cdcec0bb63] ...
	I0916 04:17:45.445516    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8cdcec0bb63"
	I0916 04:17:45.459330    4792 logs.go:123] Gathering logs for coredns [b3d786d9e441] ...
	I0916 04:17:45.459343    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3d786d9e441"
	I0916 04:17:45.471153    4792 logs.go:123] Gathering logs for kube-proxy [bfb8e053b87d] ...
	I0916 04:17:45.471165    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfb8e053b87d"
	I0916 04:17:45.487904    4792 logs.go:123] Gathering logs for Docker ...
	I0916 04:17:45.487916    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:17:45.515302    4792 logs.go:123] Gathering logs for container status ...
	I0916 04:17:45.515313    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:17:45.528288    4792 logs.go:123] Gathering logs for etcd [2f5a5e7b9e98] ...
	I0916 04:17:45.528299    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f5a5e7b9e98"
	I0916 04:17:45.544519    4792 logs.go:123] Gathering logs for coredns [aee713dc0b6b] ...
	I0916 04:17:45.544529    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aee713dc0b6b"
	I0916 04:17:45.556071    4792 logs.go:123] Gathering logs for coredns [94da14967167] ...
	I0916 04:17:45.556081    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94da14967167"
	I0916 04:17:45.568795    4792 logs.go:123] Gathering logs for storage-provisioner [78fb9b152d66] ...
	I0916 04:17:45.568806    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78fb9b152d66"
	I0916 04:17:45.583088    4792 logs.go:123] Gathering logs for dmesg ...
	I0916 04:17:45.583101    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:17:45.588085    4792 logs.go:123] Gathering logs for kube-apiserver [48042a3d7cbc] ...
	I0916 04:17:45.588094    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48042a3d7cbc"
	I0916 04:17:45.603406    4792 logs.go:123] Gathering logs for kube-controller-manager [2127fa67c447] ...
	I0916 04:17:45.603418    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2127fa67c447"
	I0916 04:17:48.124188    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:17:53.126959    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:17:53.127535    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:17:53.167101    4792 logs.go:276] 1 containers: [48042a3d7cbc]
	I0916 04:17:53.167255    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:17:53.188872    4792 logs.go:276] 1 containers: [2f5a5e7b9e98]
	I0916 04:17:53.188994    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:17:53.204412    4792 logs.go:276] 4 containers: [d8cdcec0bb63 aee713dc0b6b b3d786d9e441 94da14967167]
	I0916 04:17:53.204504    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:17:53.217107    4792 logs.go:276] 1 containers: [fbdd30ce59a6]
	I0916 04:17:53.217192    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:17:53.228042    4792 logs.go:276] 1 containers: [bfb8e053b87d]
	I0916 04:17:53.228120    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:17:53.239243    4792 logs.go:276] 1 containers: [2127fa67c447]
	I0916 04:17:53.239326    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:17:53.250831    4792 logs.go:276] 0 containers: []
	W0916 04:17:53.250844    4792 logs.go:278] No container was found matching "kindnet"
	I0916 04:17:53.250917    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:17:53.261256    4792 logs.go:276] 1 containers: [78fb9b152d66]
	I0916 04:17:53.261274    4792 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:17:53.261281    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:17:53.296129    4792 logs.go:123] Gathering logs for coredns [d8cdcec0bb63] ...
	I0916 04:17:53.296142    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8cdcec0bb63"
	I0916 04:17:53.312946    4792 logs.go:123] Gathering logs for coredns [aee713dc0b6b] ...
	I0916 04:17:53.312962    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aee713dc0b6b"
	I0916 04:17:53.324846    4792 logs.go:123] Gathering logs for container status ...
	I0916 04:17:53.324858    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:17:53.346637    4792 logs.go:123] Gathering logs for etcd [2f5a5e7b9e98] ...
	I0916 04:17:53.346650    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f5a5e7b9e98"
	I0916 04:17:53.367780    4792 logs.go:123] Gathering logs for coredns [94da14967167] ...
	I0916 04:17:53.367789    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94da14967167"
	I0916 04:17:53.381111    4792 logs.go:123] Gathering logs for kube-scheduler [fbdd30ce59a6] ...
	I0916 04:17:53.381121    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbdd30ce59a6"
	I0916 04:17:53.396404    4792 logs.go:123] Gathering logs for dmesg ...
	I0916 04:17:53.396415    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:17:53.401105    4792 logs.go:123] Gathering logs for kube-apiserver [48042a3d7cbc] ...
	I0916 04:17:53.401113    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48042a3d7cbc"
	I0916 04:17:53.415980    4792 logs.go:123] Gathering logs for kube-controller-manager [2127fa67c447] ...
	I0916 04:17:53.415989    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2127fa67c447"
	I0916 04:17:53.433168    4792 logs.go:123] Gathering logs for storage-provisioner [78fb9b152d66] ...
	I0916 04:17:53.433180    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78fb9b152d66"
	I0916 04:17:53.444626    4792 logs.go:123] Gathering logs for Docker ...
	I0916 04:17:53.444636    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:17:53.469078    4792 logs.go:123] Gathering logs for kubelet ...
	I0916 04:17:53.469084    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:17:53.505373    4792 logs.go:123] Gathering logs for coredns [b3d786d9e441] ...
	I0916 04:17:53.505380    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3d786d9e441"
	I0916 04:17:53.516715    4792 logs.go:123] Gathering logs for kube-proxy [bfb8e053b87d] ...
	I0916 04:17:53.516727    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfb8e053b87d"
	I0916 04:17:56.029093    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:18:01.031816    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:18:01.032191    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:18:01.063531    4792 logs.go:276] 1 containers: [48042a3d7cbc]
	I0916 04:18:01.063672    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:18:01.081848    4792 logs.go:276] 1 containers: [2f5a5e7b9e98]
	I0916 04:18:01.081953    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:18:01.095788    4792 logs.go:276] 4 containers: [d8cdcec0bb63 aee713dc0b6b b3d786d9e441 94da14967167]
	I0916 04:18:01.095872    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:18:01.107594    4792 logs.go:276] 1 containers: [fbdd30ce59a6]
	I0916 04:18:01.107675    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:18:01.118343    4792 logs.go:276] 1 containers: [bfb8e053b87d]
	I0916 04:18:01.118416    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:18:01.128687    4792 logs.go:276] 1 containers: [2127fa67c447]
	I0916 04:18:01.128763    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:18:01.138878    4792 logs.go:276] 0 containers: []
	W0916 04:18:01.138891    4792 logs.go:278] No container was found matching "kindnet"
	I0916 04:18:01.138954    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:18:01.148810    4792 logs.go:276] 1 containers: [78fb9b152d66]
	I0916 04:18:01.148830    4792 logs.go:123] Gathering logs for container status ...
	I0916 04:18:01.148836    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:18:01.160687    4792 logs.go:123] Gathering logs for etcd [2f5a5e7b9e98] ...
	I0916 04:18:01.160701    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f5a5e7b9e98"
	I0916 04:18:01.178232    4792 logs.go:123] Gathering logs for coredns [d8cdcec0bb63] ...
	I0916 04:18:01.178243    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8cdcec0bb63"
	I0916 04:18:01.189845    4792 logs.go:123] Gathering logs for coredns [94da14967167] ...
	I0916 04:18:01.189859    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94da14967167"
	I0916 04:18:01.201393    4792 logs.go:123] Gathering logs for storage-provisioner [78fb9b152d66] ...
	I0916 04:18:01.201405    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78fb9b152d66"
	I0916 04:18:01.213174    4792 logs.go:123] Gathering logs for kubelet ...
	I0916 04:18:01.213185    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:18:01.251324    4792 logs.go:123] Gathering logs for dmesg ...
	I0916 04:18:01.251332    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:18:01.255242    4792 logs.go:123] Gathering logs for coredns [b3d786d9e441] ...
	I0916 04:18:01.255248    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3d786d9e441"
	I0916 04:18:01.268152    4792 logs.go:123] Gathering logs for kube-scheduler [fbdd30ce59a6] ...
	I0916 04:18:01.268164    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbdd30ce59a6"
	I0916 04:18:01.283804    4792 logs.go:123] Gathering logs for kube-apiserver [48042a3d7cbc] ...
	I0916 04:18:01.283813    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48042a3d7cbc"
	I0916 04:18:01.297667    4792 logs.go:123] Gathering logs for kube-proxy [bfb8e053b87d] ...
	I0916 04:18:01.297680    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfb8e053b87d"
	I0916 04:18:01.312812    4792 logs.go:123] Gathering logs for kube-controller-manager [2127fa67c447] ...
	I0916 04:18:01.312825    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2127fa67c447"
	I0916 04:18:01.330206    4792 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:18:01.330216    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:18:01.369677    4792 logs.go:123] Gathering logs for coredns [aee713dc0b6b] ...
	I0916 04:18:01.369689    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aee713dc0b6b"
	I0916 04:18:01.381139    4792 logs.go:123] Gathering logs for Docker ...
	I0916 04:18:01.381153    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:18:03.906922    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:18:08.909508    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:18:08.909584    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:18:08.920437    4792 logs.go:276] 1 containers: [48042a3d7cbc]
	I0916 04:18:08.920518    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:18:08.931144    4792 logs.go:276] 1 containers: [2f5a5e7b9e98]
	I0916 04:18:08.931231    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:18:08.942565    4792 logs.go:276] 4 containers: [d8cdcec0bb63 aee713dc0b6b b3d786d9e441 94da14967167]
	I0916 04:18:08.942656    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:18:08.954145    4792 logs.go:276] 1 containers: [fbdd30ce59a6]
	I0916 04:18:08.954221    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:18:08.965814    4792 logs.go:276] 1 containers: [bfb8e053b87d]
	I0916 04:18:08.965901    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:18:08.976832    4792 logs.go:276] 1 containers: [2127fa67c447]
	I0916 04:18:08.976911    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:18:08.987676    4792 logs.go:276] 0 containers: []
	W0916 04:18:08.987687    4792 logs.go:278] No container was found matching "kindnet"
	I0916 04:18:08.987746    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:18:09.003719    4792 logs.go:276] 1 containers: [78fb9b152d66]
	I0916 04:18:09.003740    4792 logs.go:123] Gathering logs for kube-proxy [bfb8e053b87d] ...
	I0916 04:18:09.003747    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfb8e053b87d"
	I0916 04:18:09.017820    4792 logs.go:123] Gathering logs for coredns [d8cdcec0bb63] ...
	I0916 04:18:09.017834    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8cdcec0bb63"
	I0916 04:18:09.029980    4792 logs.go:123] Gathering logs for kube-scheduler [fbdd30ce59a6] ...
	I0916 04:18:09.029994    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbdd30ce59a6"
	I0916 04:18:09.047427    4792 logs.go:123] Gathering logs for container status ...
	I0916 04:18:09.047439    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:18:09.060779    4792 logs.go:123] Gathering logs for etcd [2f5a5e7b9e98] ...
	I0916 04:18:09.060792    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f5a5e7b9e98"
	I0916 04:18:09.075772    4792 logs.go:123] Gathering logs for coredns [b3d786d9e441] ...
	I0916 04:18:09.075784    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3d786d9e441"
	I0916 04:18:09.089026    4792 logs.go:123] Gathering logs for dmesg ...
	I0916 04:18:09.089039    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:18:09.095096    4792 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:18:09.095104    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:18:09.136347    4792 logs.go:123] Gathering logs for kube-apiserver [48042a3d7cbc] ...
	I0916 04:18:09.136360    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48042a3d7cbc"
	I0916 04:18:09.153325    4792 logs.go:123] Gathering logs for kube-controller-manager [2127fa67c447] ...
	I0916 04:18:09.153341    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2127fa67c447"
	I0916 04:18:09.172108    4792 logs.go:123] Gathering logs for storage-provisioner [78fb9b152d66] ...
	I0916 04:18:09.172122    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78fb9b152d66"
	I0916 04:18:09.185472    4792 logs.go:123] Gathering logs for Docker ...
	I0916 04:18:09.185483    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:18:09.211662    4792 logs.go:123] Gathering logs for kubelet ...
	I0916 04:18:09.211672    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:18:09.250305    4792 logs.go:123] Gathering logs for coredns [aee713dc0b6b] ...
	I0916 04:18:09.250316    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aee713dc0b6b"
	I0916 04:18:09.262613    4792 logs.go:123] Gathering logs for coredns [94da14967167] ...
	I0916 04:18:09.262624    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94da14967167"
	I0916 04:18:11.778134    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:18:16.780918    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:18:16.781533    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:18:16.828758    4792 logs.go:276] 1 containers: [48042a3d7cbc]
	I0916 04:18:16.828897    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:18:16.853857    4792 logs.go:276] 1 containers: [2f5a5e7b9e98]
	I0916 04:18:16.853973    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:18:16.877800    4792 logs.go:276] 4 containers: [d8cdcec0bb63 aee713dc0b6b b3d786d9e441 94da14967167]
	I0916 04:18:16.877874    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:18:16.891396    4792 logs.go:276] 1 containers: [fbdd30ce59a6]
	I0916 04:18:16.891480    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:18:16.902066    4792 logs.go:276] 1 containers: [bfb8e053b87d]
	I0916 04:18:16.902152    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:18:16.913141    4792 logs.go:276] 1 containers: [2127fa67c447]
	I0916 04:18:16.913209    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:18:16.924124    4792 logs.go:276] 0 containers: []
	W0916 04:18:16.924133    4792 logs.go:278] No container was found matching "kindnet"
	I0916 04:18:16.924190    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:18:16.934831    4792 logs.go:276] 1 containers: [78fb9b152d66]
	I0916 04:18:16.934848    4792 logs.go:123] Gathering logs for dmesg ...
	I0916 04:18:16.934854    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:18:16.939222    4792 logs.go:123] Gathering logs for coredns [d8cdcec0bb63] ...
	I0916 04:18:16.939229    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8cdcec0bb63"
	I0916 04:18:16.951392    4792 logs.go:123] Gathering logs for kube-scheduler [fbdd30ce59a6] ...
	I0916 04:18:16.951409    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbdd30ce59a6"
	I0916 04:18:16.966622    4792 logs.go:123] Gathering logs for storage-provisioner [78fb9b152d66] ...
	I0916 04:18:16.966633    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78fb9b152d66"
	I0916 04:18:16.978275    4792 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:18:16.978288    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:18:17.014832    4792 logs.go:123] Gathering logs for container status ...
	I0916 04:18:17.014843    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:18:17.027151    4792 logs.go:123] Gathering logs for kube-apiserver [48042a3d7cbc] ...
	I0916 04:18:17.027165    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48042a3d7cbc"
	I0916 04:18:17.043766    4792 logs.go:123] Gathering logs for etcd [2f5a5e7b9e98] ...
	I0916 04:18:17.043776    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f5a5e7b9e98"
	I0916 04:18:17.058160    4792 logs.go:123] Gathering logs for coredns [aee713dc0b6b] ...
	I0916 04:18:17.058173    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aee713dc0b6b"
	I0916 04:18:17.069291    4792 logs.go:123] Gathering logs for coredns [94da14967167] ...
	I0916 04:18:17.069305    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94da14967167"
	I0916 04:18:17.081065    4792 logs.go:123] Gathering logs for kube-controller-manager [2127fa67c447] ...
	I0916 04:18:17.081075    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2127fa67c447"
	I0916 04:18:17.098692    4792 logs.go:123] Gathering logs for kubelet ...
	I0916 04:18:17.098703    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:18:17.135715    4792 logs.go:123] Gathering logs for coredns [b3d786d9e441] ...
	I0916 04:18:17.135722    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3d786d9e441"
	I0916 04:18:17.147629    4792 logs.go:123] Gathering logs for kube-proxy [bfb8e053b87d] ...
	I0916 04:18:17.147639    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfb8e053b87d"
	I0916 04:18:17.161236    4792 logs.go:123] Gathering logs for Docker ...
	I0916 04:18:17.161246    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:18:19.687713    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:18:24.690428    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:18:24.690726    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:18:24.718519    4792 logs.go:276] 1 containers: [48042a3d7cbc]
	I0916 04:18:24.718645    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:18:24.736466    4792 logs.go:276] 1 containers: [2f5a5e7b9e98]
	I0916 04:18:24.736548    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:18:24.750708    4792 logs.go:276] 4 containers: [d8cdcec0bb63 aee713dc0b6b b3d786d9e441 94da14967167]
	I0916 04:18:24.750799    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:18:24.762678    4792 logs.go:276] 1 containers: [fbdd30ce59a6]
	I0916 04:18:24.762760    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:18:24.776524    4792 logs.go:276] 1 containers: [bfb8e053b87d]
	I0916 04:18:24.776607    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:18:24.788844    4792 logs.go:276] 1 containers: [2127fa67c447]
	I0916 04:18:24.788923    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:18:24.798960    4792 logs.go:276] 0 containers: []
	W0916 04:18:24.798973    4792 logs.go:278] No container was found matching "kindnet"
	I0916 04:18:24.799194    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:18:24.810168    4792 logs.go:276] 1 containers: [78fb9b152d66]
	I0916 04:18:24.810187    4792 logs.go:123] Gathering logs for coredns [aee713dc0b6b] ...
	I0916 04:18:24.810194    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aee713dc0b6b"
	I0916 04:18:24.821570    4792 logs.go:123] Gathering logs for coredns [b3d786d9e441] ...
	I0916 04:18:24.821583    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3d786d9e441"
	I0916 04:18:24.835243    4792 logs.go:123] Gathering logs for coredns [d8cdcec0bb63] ...
	I0916 04:18:24.835258    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8cdcec0bb63"
	I0916 04:18:24.852719    4792 logs.go:123] Gathering logs for storage-provisioner [78fb9b152d66] ...
	I0916 04:18:24.852730    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78fb9b152d66"
	I0916 04:18:24.864490    4792 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:18:24.864499    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:18:24.903490    4792 logs.go:123] Gathering logs for kube-apiserver [48042a3d7cbc] ...
	I0916 04:18:24.903501    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48042a3d7cbc"
	I0916 04:18:24.918099    4792 logs.go:123] Gathering logs for etcd [2f5a5e7b9e98] ...
	I0916 04:18:24.918109    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f5a5e7b9e98"
	I0916 04:18:24.933020    4792 logs.go:123] Gathering logs for coredns [94da14967167] ...
	I0916 04:18:24.933031    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94da14967167"
	I0916 04:18:24.944979    4792 logs.go:123] Gathering logs for kube-scheduler [fbdd30ce59a6] ...
	I0916 04:18:24.944990    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbdd30ce59a6"
	I0916 04:18:24.960197    4792 logs.go:123] Gathering logs for kubelet ...
	I0916 04:18:24.960206    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:18:24.997427    4792 logs.go:123] Gathering logs for dmesg ...
	I0916 04:18:24.997433    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:18:25.001676    4792 logs.go:123] Gathering logs for Docker ...
	I0916 04:18:25.001683    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:18:25.024234    4792 logs.go:123] Gathering logs for container status ...
	I0916 04:18:25.024239    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:18:25.036768    4792 logs.go:123] Gathering logs for kube-proxy [bfb8e053b87d] ...
	I0916 04:18:25.036777    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfb8e053b87d"
	I0916 04:18:25.049152    4792 logs.go:123] Gathering logs for kube-controller-manager [2127fa67c447] ...
	I0916 04:18:25.049161    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2127fa67c447"
	I0916 04:18:27.572631    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:18:32.575198    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:18:32.575295    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:18:32.586349    4792 logs.go:276] 1 containers: [48042a3d7cbc]
	I0916 04:18:32.586436    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:18:32.598177    4792 logs.go:276] 1 containers: [2f5a5e7b9e98]
	I0916 04:18:32.598258    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:18:32.609778    4792 logs.go:276] 4 containers: [d8cdcec0bb63 aee713dc0b6b b3d786d9e441 94da14967167]
	I0916 04:18:32.609876    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:18:32.621283    4792 logs.go:276] 1 containers: [fbdd30ce59a6]
	I0916 04:18:32.621442    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:18:32.638593    4792 logs.go:276] 1 containers: [bfb8e053b87d]
	I0916 04:18:32.638669    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:18:32.649914    4792 logs.go:276] 1 containers: [2127fa67c447]
	I0916 04:18:32.649991    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:18:32.663220    4792 logs.go:276] 0 containers: []
	W0916 04:18:32.663232    4792 logs.go:278] No container was found matching "kindnet"
	I0916 04:18:32.663299    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:18:32.674629    4792 logs.go:276] 1 containers: [78fb9b152d66]
	I0916 04:18:32.674649    4792 logs.go:123] Gathering logs for dmesg ...
	I0916 04:18:32.674655    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:18:32.679500    4792 logs.go:123] Gathering logs for container status ...
	I0916 04:18:32.679513    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:18:32.692016    4792 logs.go:123] Gathering logs for etcd [2f5a5e7b9e98] ...
	I0916 04:18:32.692028    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f5a5e7b9e98"
	I0916 04:18:32.706667    4792 logs.go:123] Gathering logs for coredns [b3d786d9e441] ...
	I0916 04:18:32.706680    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3d786d9e441"
	I0916 04:18:32.719978    4792 logs.go:123] Gathering logs for kube-controller-manager [2127fa67c447] ...
	I0916 04:18:32.719993    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2127fa67c447"
	I0916 04:18:32.738699    4792 logs.go:123] Gathering logs for Docker ...
	I0916 04:18:32.738712    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:18:32.764620    4792 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:18:32.764641    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:18:32.801361    4792 logs.go:123] Gathering logs for coredns [aee713dc0b6b] ...
	I0916 04:18:32.801375    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aee713dc0b6b"
	I0916 04:18:32.814328    4792 logs.go:123] Gathering logs for kube-scheduler [fbdd30ce59a6] ...
	I0916 04:18:32.814341    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbdd30ce59a6"
	I0916 04:18:32.830387    4792 logs.go:123] Gathering logs for kubelet ...
	I0916 04:18:32.830402    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:18:32.875747    4792 logs.go:123] Gathering logs for kube-apiserver [48042a3d7cbc] ...
	I0916 04:18:32.875764    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48042a3d7cbc"
	I0916 04:18:32.893189    4792 logs.go:123] Gathering logs for coredns [d8cdcec0bb63] ...
	I0916 04:18:32.893203    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8cdcec0bb63"
	I0916 04:18:32.905292    4792 logs.go:123] Gathering logs for coredns [94da14967167] ...
	I0916 04:18:32.905301    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94da14967167"
	I0916 04:18:32.917525    4792 logs.go:123] Gathering logs for kube-proxy [bfb8e053b87d] ...
	I0916 04:18:32.917538    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfb8e053b87d"
	I0916 04:18:32.929757    4792 logs.go:123] Gathering logs for storage-provisioner [78fb9b152d66] ...
	I0916 04:18:32.929769    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78fb9b152d66"
	I0916 04:18:35.445263    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:18:40.447628    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:18:40.448162    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:18:40.484734    4792 logs.go:276] 1 containers: [48042a3d7cbc]
	I0916 04:18:40.484890    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:18:40.508344    4792 logs.go:276] 1 containers: [2f5a5e7b9e98]
	I0916 04:18:40.508480    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:18:40.522984    4792 logs.go:276] 4 containers: [d8cdcec0bb63 aee713dc0b6b b3d786d9e441 94da14967167]
	I0916 04:18:40.523075    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:18:40.534648    4792 logs.go:276] 1 containers: [fbdd30ce59a6]
	I0916 04:18:40.534730    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:18:40.545422    4792 logs.go:276] 1 containers: [bfb8e053b87d]
	I0916 04:18:40.545499    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:18:40.555768    4792 logs.go:276] 1 containers: [2127fa67c447]
	I0916 04:18:40.555832    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:18:40.565970    4792 logs.go:276] 0 containers: []
	W0916 04:18:40.565982    4792 logs.go:278] No container was found matching "kindnet"
	I0916 04:18:40.566050    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:18:40.576765    4792 logs.go:276] 1 containers: [78fb9b152d66]
	I0916 04:18:40.576783    4792 logs.go:123] Gathering logs for kubelet ...
	I0916 04:18:40.576789    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:18:40.613058    4792 logs.go:123] Gathering logs for etcd [2f5a5e7b9e98] ...
	I0916 04:18:40.613067    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f5a5e7b9e98"
	I0916 04:18:40.630119    4792 logs.go:123] Gathering logs for container status ...
	I0916 04:18:40.630133    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:18:40.645600    4792 logs.go:123] Gathering logs for kube-proxy [bfb8e053b87d] ...
	I0916 04:18:40.645611    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfb8e053b87d"
	I0916 04:18:40.657306    4792 logs.go:123] Gathering logs for Docker ...
	I0916 04:18:40.657315    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:18:40.681845    4792 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:18:40.681858    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:18:40.716143    4792 logs.go:123] Gathering logs for kube-apiserver [48042a3d7cbc] ...
	I0916 04:18:40.716155    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48042a3d7cbc"
	I0916 04:18:40.733682    4792 logs.go:123] Gathering logs for coredns [aee713dc0b6b] ...
	I0916 04:18:40.733693    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aee713dc0b6b"
	I0916 04:18:40.746373    4792 logs.go:123] Gathering logs for coredns [94da14967167] ...
	I0916 04:18:40.746384    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94da14967167"
	I0916 04:18:40.758136    4792 logs.go:123] Gathering logs for kube-scheduler [fbdd30ce59a6] ...
	I0916 04:18:40.758146    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbdd30ce59a6"
	I0916 04:18:40.772722    4792 logs.go:123] Gathering logs for dmesg ...
	I0916 04:18:40.772732    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:18:40.777262    4792 logs.go:123] Gathering logs for coredns [d8cdcec0bb63] ...
	I0916 04:18:40.777267    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8cdcec0bb63"
	I0916 04:18:40.789069    4792 logs.go:123] Gathering logs for coredns [b3d786d9e441] ...
	I0916 04:18:40.789079    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3d786d9e441"
	I0916 04:18:40.800666    4792 logs.go:123] Gathering logs for kube-controller-manager [2127fa67c447] ...
	I0916 04:18:40.800681    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2127fa67c447"
	I0916 04:18:40.822356    4792 logs.go:123] Gathering logs for storage-provisioner [78fb9b152d66] ...
	I0916 04:18:40.822367    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78fb9b152d66"
	I0916 04:18:43.336897    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:18:48.339691    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:18:48.340161    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 04:18:48.374903    4792 logs.go:276] 1 containers: [48042a3d7cbc]
	I0916 04:18:48.375054    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 04:18:48.396088    4792 logs.go:276] 1 containers: [2f5a5e7b9e98]
	I0916 04:18:48.396198    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 04:18:48.410806    4792 logs.go:276] 4 containers: [d8cdcec0bb63 aee713dc0b6b b3d786d9e441 94da14967167]
	I0916 04:18:48.410886    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 04:18:48.422617    4792 logs.go:276] 1 containers: [fbdd30ce59a6]
	I0916 04:18:48.422700    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 04:18:48.433872    4792 logs.go:276] 1 containers: [bfb8e053b87d]
	I0916 04:18:48.433955    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 04:18:48.446338    4792 logs.go:276] 1 containers: [2127fa67c447]
	I0916 04:18:48.446413    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 04:18:48.456696    4792 logs.go:276] 0 containers: []
	W0916 04:18:48.456711    4792 logs.go:278] No container was found matching "kindnet"
	I0916 04:18:48.456776    4792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 04:18:48.467213    4792 logs.go:276] 1 containers: [78fb9b152d66]
	I0916 04:18:48.467232    4792 logs.go:123] Gathering logs for container status ...
	I0916 04:18:48.467238    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 04:18:48.479021    4792 logs.go:123] Gathering logs for dmesg ...
	I0916 04:18:48.479030    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 04:18:48.483808    4792 logs.go:123] Gathering logs for describe nodes ...
	I0916 04:18:48.483817    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 04:18:48.519427    4792 logs.go:123] Gathering logs for kube-apiserver [48042a3d7cbc] ...
	I0916 04:18:48.519441    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48042a3d7cbc"
	I0916 04:18:48.538517    4792 logs.go:123] Gathering logs for Docker ...
	I0916 04:18:48.538525    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 04:18:48.563235    4792 logs.go:123] Gathering logs for coredns [d8cdcec0bb63] ...
	I0916 04:18:48.563251    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8cdcec0bb63"
	I0916 04:18:48.575283    4792 logs.go:123] Gathering logs for coredns [94da14967167] ...
	I0916 04:18:48.575296    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94da14967167"
	I0916 04:18:48.586613    4792 logs.go:123] Gathering logs for coredns [b3d786d9e441] ...
	I0916 04:18:48.586626    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3d786d9e441"
	I0916 04:18:48.597991    4792 logs.go:123] Gathering logs for kube-scheduler [fbdd30ce59a6] ...
	I0916 04:18:48.598002    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbdd30ce59a6"
	I0916 04:18:48.613014    4792 logs.go:123] Gathering logs for kube-proxy [bfb8e053b87d] ...
	I0916 04:18:48.613024    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfb8e053b87d"
	I0916 04:18:48.624389    4792 logs.go:123] Gathering logs for storage-provisioner [78fb9b152d66] ...
	I0916 04:18:48.624397    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78fb9b152d66"
	I0916 04:18:48.635421    4792 logs.go:123] Gathering logs for kubelet ...
	I0916 04:18:48.635431    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 04:18:48.673048    4792 logs.go:123] Gathering logs for etcd [2f5a5e7b9e98] ...
	I0916 04:18:48.673058    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f5a5e7b9e98"
	I0916 04:18:48.687133    4792 logs.go:123] Gathering logs for coredns [aee713dc0b6b] ...
	I0916 04:18:48.687145    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aee713dc0b6b"
	I0916 04:18:48.698448    4792 logs.go:123] Gathering logs for kube-controller-manager [2127fa67c447] ...
	I0916 04:18:48.698460    4792 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2127fa67c447"
	I0916 04:18:51.218442    4792 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 04:18:56.221215    4792 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 04:18:56.226437    4792 out.go:201] 
	W0916 04:18:56.230458    4792 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0916 04:18:56.230485    4792 out.go:270] * 
	W0916 04:18:56.232688    4792 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 04:18:56.242303    4792 out.go:201] 

** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-716000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (585.97s)
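Note that this failure mode differs from the rest of the report: the VM came up, but the kube-apiserver healthz poll never succeeded within the 6m0s node wait. A minimal manual probe, sketched under the assumption that the stopped-upgrade-716000 VM is still running and that curl is available in the guest (the endpoint, profile name, and container ID 48042a3d7cbc are taken from the log above; -k is needed because the apiserver serves a self-signed certificate):

	out/minikube-darwin-arm64 ssh -p stopped-upgrade-716000 -- curl -k https://10.0.2.15:8443/healthz
	out/minikube-darwin-arm64 ssh -p stopped-upgrade-716000 -- sudo docker logs --tail 400 48042a3d7cbc

If healthz never answers, the kube-apiserver container log is the most likely place to show why the v1.24.1 control plane failed to come back after the upgrade.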

TestPause/serial/Start (10.17s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-881000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-881000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (10.118352083s)

-- stdout --
	* [pause-881000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19651
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-881000" primary control-plane node in "pause-881000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-881000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-881000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-881000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-881000 -n pause-881000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-881000 -n pause-881000: exit status 7 (50.357916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-881000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (10.17s)
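Every remaining failure in this report shares this root cause: the qemu2 driver cannot reach the socket_vmnet daemon at /var/run/socket_vmnet. A quick host-side check to confirm the daemon rather than minikube is at fault (the socket path comes from the error above; the launchd label varies by install method, so the grep pattern is only a guess):

	ls -l /var/run/socket_vmnet
	sudo launchctl list | grep -i socket_vmnet

If the socket file is missing or nothing is listening on it, every qemu2 start on this agent will keep failing with the same "Connection refused" until the daemon is restarted.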

TestNoKubernetes/serial/StartWithK8s (9.98s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-596000 --driver=qemu2 
E0916 04:16:00.250857    1652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/functional-926000/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-596000 --driver=qemu2 : exit status 80 (9.924844s)

-- stdout --
	* [NoKubernetes-596000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19651
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-596000" primary control-plane node in "NoKubernetes-596000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-596000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-596000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-596000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-596000 -n NoKubernetes-596000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-596000 -n NoKubernetes-596000: exit status 7 (52.097833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-596000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.98s)

TestNoKubernetes/serial/StartWithStopK8s (5.29s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-596000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-596000 --no-kubernetes --driver=qemu2 : exit status 80 (5.243423584s)

-- stdout --
	* [NoKubernetes-596000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19651
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-596000
	* Restarting existing qemu2 VM for "NoKubernetes-596000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-596000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-596000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-596000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-596000 -n NoKubernetes-596000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-596000 -n NoKubernetes-596000: exit status 7 (45.749291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-596000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.29s)

TestNoKubernetes/serial/Start (5.3s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-596000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-596000 --no-kubernetes --driver=qemu2 : exit status 80 (5.237593625s)

-- stdout --
	* [NoKubernetes-596000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19651
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-596000
	* Restarting existing qemu2 VM for "NoKubernetes-596000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-596000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-596000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-596000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-596000 -n NoKubernetes-596000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-596000 -n NoKubernetes-596000: exit status 7 (60.522667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-596000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.30s)
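The post-mortem helper above reads only the Host field via a Go template. The same --format flag can report the remaining status fields in one call; a sketch (field names as documented for minikube status, using the test's binary path):

	out/minikube-darwin-arm64 status -p NoKubernetes-596000 --format={{.Host}}/{{.Kubelet}}/{{.APIServer}}

Here each field would be expected to report Stopped, consistent with the exit status 7 above.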

TestNoKubernetes/serial/StartNoArgs (5.32s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-596000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-596000 --driver=qemu2 : exit status 80 (5.252525666s)

-- stdout --
	* [NoKubernetes-596000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19651
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-596000
	* Restarting existing qemu2 VM for "NoKubernetes-596000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-596000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-596000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-596000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-596000 -n NoKubernetes-596000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-596000 -n NoKubernetes-596000: exit status 7 (64.956417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-596000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.32s)
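The stderr blocks above suggest the recovery path themselves: delete the half-created profile and start over once socket_vmnet is reachable again. The retry, using the command the log recommends and the failing test's own arguments:

	out/minikube-darwin-arm64 delete -p NoKubernetes-596000
	out/minikube-darwin-arm64 start -p NoKubernetes-596000 --driver=qemu2

On this agent the retry would still fail, because the socket_vmnet daemon itself is down; the delete only clears the stale qemu2 VM state.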

TestNetworkPlugins/group/auto/Start (9.86s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-725000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-725000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.86143025s)

-- stdout --
	* [auto-725000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19651
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-725000" primary control-plane node in "auto-725000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-725000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0916 04:16:57.979469    5027 out.go:345] Setting OutFile to fd 1 ...
	I0916 04:16:57.979614    5027 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:16:57.979618    5027 out.go:358] Setting ErrFile to fd 2...
	I0916 04:16:57.979620    5027 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:16:57.979751    5027 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1133/.minikube/bin
	I0916 04:16:57.980828    5027 out.go:352] Setting JSON to false
	I0916 04:16:57.997373    5027 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4580,"bootTime":1726480837,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0916 04:16:57.997437    5027 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 04:16:58.003491    5027 out.go:177] * [auto-725000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0916 04:16:58.010481    5027 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 04:16:58.010528    5027 notify.go:220] Checking for updates...
	I0916 04:16:58.017368    5027 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	I0916 04:16:58.020396    5027 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0916 04:16:58.023427    5027 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 04:16:58.026433    5027 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	I0916 04:16:58.029383    5027 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 04:16:58.032849    5027 config.go:182] Loaded profile config "multinode-990000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 04:16:58.032923    5027 config.go:182] Loaded profile config "stopped-upgrade-716000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0916 04:16:58.032973    5027 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 04:16:58.035251    5027 out.go:177] * Using the qemu2 driver based on user configuration
	I0916 04:16:58.042411    5027 start.go:297] selected driver: qemu2
	I0916 04:16:58.042421    5027 start.go:901] validating driver "qemu2" against <nil>
	I0916 04:16:58.042433    5027 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 04:16:58.045069    5027 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 04:16:58.046283    5027 out.go:177] * Automatically selected the socket_vmnet network
	I0916 04:16:58.049453    5027 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 04:16:58.049475    5027 cni.go:84] Creating CNI manager for ""
	I0916 04:16:58.049498    5027 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 04:16:58.049507    5027 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0916 04:16:58.049544    5027 start.go:340] cluster config:
	{Name:auto-725000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-725000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 04:16:58.053573    5027 iso.go:125] acquiring lock: {Name:mk55e9fb1297ea51932361bbd0234f8c1091a697 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 04:16:58.060329    5027 out.go:177] * Starting "auto-725000" primary control-plane node in "auto-725000" cluster
	I0916 04:16:58.064380    5027 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 04:16:58.064411    5027 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0916 04:16:58.064420    5027 cache.go:56] Caching tarball of preloaded images
	I0916 04:16:58.064509    5027 preload.go:172] Found /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0916 04:16:58.064515    5027 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0916 04:16:58.064580    5027 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/auto-725000/config.json ...
	I0916 04:16:58.064591    5027 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/auto-725000/config.json: {Name:mk198b165d8a4ad651030d1cc7c50f009f74f201 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 04:16:58.065000    5027 start.go:360] acquireMachinesLock for auto-725000: {Name:mk751074fc8f4092f9eb4cf42ed6e63cca97a7fa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 04:16:58.065041    5027 start.go:364] duration metric: took 33.583µs to acquireMachinesLock for "auto-725000"
	I0916 04:16:58.065053    5027 start.go:93] Provisioning new machine with config: &{Name:auto-725000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-725000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 04:16:58.065089    5027 start.go:125] createHost starting for "" (driver="qemu2")
	I0916 04:16:58.069326    5027 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0916 04:16:58.085855    5027 start.go:159] libmachine.API.Create for "auto-725000" (driver="qemu2")
	I0916 04:16:58.085892    5027 client.go:168] LocalClient.Create starting
	I0916 04:16:58.085964    5027 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/ca.pem
	I0916 04:16:58.086001    5027 main.go:141] libmachine: Decoding PEM data...
	I0916 04:16:58.086010    5027 main.go:141] libmachine: Parsing certificate...
	I0916 04:16:58.086046    5027 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/cert.pem
	I0916 04:16:58.086070    5027 main.go:141] libmachine: Decoding PEM data...
	I0916 04:16:58.086078    5027 main.go:141] libmachine: Parsing certificate...
	I0916 04:16:58.086475    5027 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19651-1133/.minikube/cache/iso/arm64/minikube-v1.34.0-1726415472-19646-arm64.iso...
	I0916 04:16:58.248131    5027 main.go:141] libmachine: Creating SSH key...
	I0916 04:16:58.314485    5027 main.go:141] libmachine: Creating Disk image...
	I0916 04:16:58.314495    5027 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0916 04:16:58.314743    5027 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/auto-725000/disk.qcow2.raw /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/auto-725000/disk.qcow2
	I0916 04:16:58.324830    5027 main.go:141] libmachine: STDOUT: 
	I0916 04:16:58.324864    5027 main.go:141] libmachine: STDERR: 
	I0916 04:16:58.324943    5027 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/auto-725000/disk.qcow2 +20000M
	I0916 04:16:58.334088    5027 main.go:141] libmachine: STDOUT: Image resized.
	
	I0916 04:16:58.334108    5027 main.go:141] libmachine: STDERR: 
	I0916 04:16:58.334136    5027 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/auto-725000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/auto-725000/disk.qcow2
	I0916 04:16:58.334141    5027 main.go:141] libmachine: Starting QEMU VM...
	I0916 04:16:58.334168    5027 qemu.go:418] Using hvf for hardware acceleration
	I0916 04:16:58.334194    5027 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/auto-725000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/auto-725000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/auto-725000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:e3:1f:ce:ce:1d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/auto-725000/disk.qcow2
	I0916 04:16:58.336071    5027 main.go:141] libmachine: STDOUT: 
	I0916 04:16:58.336086    5027 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 04:16:58.336109    5027 client.go:171] duration metric: took 250.214666ms to LocalClient.Create
	I0916 04:17:00.338305    5027 start.go:128] duration metric: took 2.273222292s to createHost
	I0916 04:17:00.338376    5027 start.go:83] releasing machines lock for "auto-725000", held for 2.273369791s
	W0916 04:17:00.338442    5027 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 04:17:00.345959    5027 out.go:177] * Deleting "auto-725000" in qemu2 ...
	W0916 04:17:00.383665    5027 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 04:17:00.383696    5027 start.go:729] Will try again in 5 seconds ...
	I0916 04:17:05.385756    5027 start.go:360] acquireMachinesLock for auto-725000: {Name:mk751074fc8f4092f9eb4cf42ed6e63cca97a7fa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 04:17:05.385943    5027 start.go:364] duration metric: took 146.75µs to acquireMachinesLock for "auto-725000"
	I0916 04:17:05.385989    5027 start.go:93] Provisioning new machine with config: &{Name:auto-725000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-725000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 04:17:05.386040    5027 start.go:125] createHost starting for "" (driver="qemu2")
	I0916 04:17:05.394247    5027 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0916 04:17:05.415911    5027 start.go:159] libmachine.API.Create for "auto-725000" (driver="qemu2")
	I0916 04:17:05.415943    5027 client.go:168] LocalClient.Create starting
	I0916 04:17:05.416012    5027 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/ca.pem
	I0916 04:17:05.416069    5027 main.go:141] libmachine: Decoding PEM data...
	I0916 04:17:05.416081    5027 main.go:141] libmachine: Parsing certificate...
	I0916 04:17:05.416133    5027 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/cert.pem
	I0916 04:17:05.416163    5027 main.go:141] libmachine: Decoding PEM data...
	I0916 04:17:05.416173    5027 main.go:141] libmachine: Parsing certificate...
	I0916 04:17:05.416511    5027 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19651-1133/.minikube/cache/iso/arm64/minikube-v1.34.0-1726415472-19646-arm64.iso...
	I0916 04:17:05.579423    5027 main.go:141] libmachine: Creating SSH key...
	I0916 04:17:05.748542    5027 main.go:141] libmachine: Creating Disk image...
	I0916 04:17:05.748549    5027 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0916 04:17:05.748772    5027 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/auto-725000/disk.qcow2.raw /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/auto-725000/disk.qcow2
	I0916 04:17:05.758378    5027 main.go:141] libmachine: STDOUT: 
	I0916 04:17:05.758399    5027 main.go:141] libmachine: STDERR: 
	I0916 04:17:05.758472    5027 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/auto-725000/disk.qcow2 +20000M
	I0916 04:17:05.766431    5027 main.go:141] libmachine: STDOUT: Image resized.
	
	I0916 04:17:05.766448    5027 main.go:141] libmachine: STDERR: 
	I0916 04:17:05.766458    5027 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/auto-725000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/auto-725000/disk.qcow2
	I0916 04:17:05.766462    5027 main.go:141] libmachine: Starting QEMU VM...
	I0916 04:17:05.766474    5027 qemu.go:418] Using hvf for hardware acceleration
	I0916 04:17:05.766508    5027 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/auto-725000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/auto-725000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/auto-725000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:74:0c:9c:07:80 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/auto-725000/disk.qcow2
	I0916 04:17:05.768211    5027 main.go:141] libmachine: STDOUT: 
	I0916 04:17:05.768225    5027 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 04:17:05.768239    5027 client.go:171] duration metric: took 352.299292ms to LocalClient.Create
	I0916 04:17:07.770465    5027 start.go:128] duration metric: took 2.384430125s to createHost
	I0916 04:17:07.770552    5027 start.go:83] releasing machines lock for "auto-725000", held for 2.384643917s
	W0916 04:17:07.770887    5027 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p auto-725000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-725000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 04:17:07.780679    5027 out.go:201] 
	W0916 04:17:07.786827    5027 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0916 04:17:07.786886    5027 out.go:270] * 
	* 
	W0916 04:17:07.789892    5027 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 04:17:07.797546    5027 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.86s)
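
Note: the failures in this group are one shared environment fault, not per-CNI regressions. In each run /opt/socket_vmnet/bin/socket_vmnet_client is asked to hand QEMU a file descriptor from the socket_vmnet daemon, gets "Connection refused" on /var/run/socket_vmnet, and the VM never launches, so minikube start exits with status 80. A minimal preflight probe for the build agent, as a sketch (package layout, messages, and the 2-second timeout are illustrative; only the socket path is taken from the logs above):

	// probe.go: verify a socket_vmnet daemon is accepting connections on the
	// unix socket the qemu2 driver expects, before running the suite.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path from the failing logs
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// The condition reported above: nothing is listening on the socket.
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

Running such a probe (go run probe.go) before the network-plugin group would separate a dead socket_vmnet daemon from genuine test failures.
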
TestNetworkPlugins/group/kindnet/Start (9.91s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-725000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-725000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.903014292s)

-- stdout --
	* [kindnet-725000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19651
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-725000" primary control-plane node in "kindnet-725000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-725000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
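
Note: the stdout above compresses the whole failure cycle: create the VM, hit "Connection refused", delete the profile, wait, create once more, fail again. A sketch of that control flow (createHost and deleteHost are hypothetical stand-ins, not minikube's actual API; only the single retry and the 5-second delay come from the logs):

	// retry.go: illustrative create -> fail -> delete -> retry-once flow.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// Stand-ins for the driver calls; createHost fails the way these runs do.
	func createHost(name string) error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func deleteHost(name string) {} // "* Deleting <name> in qemu2 ..."

	func startWithRetry(name string) error {
		if err := createHost(name); err == nil {
			return nil
		}
		// "! StartHost failed, but will try again: ..."
		deleteHost(name)
		time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
		if err := createHost(name); err != nil {
			// The second failure is fatal and surfaces as exit status 80.
			return fmt.Errorf("GUEST_PROVISION: error provisioning guest: %w", err)
		}
		return nil
	}

	func main() {
		if err := startWithRetry("kindnet-725000"); err != nil {
			fmt.Println("X Exiting due to", err)
		}
	}
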
** stderr ** 
	I0916 04:17:09.975298    5136 out.go:345] Setting OutFile to fd 1 ...
	I0916 04:17:09.975435    5136 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:17:09.975438    5136 out.go:358] Setting ErrFile to fd 2...
	I0916 04:17:09.975441    5136 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:17:09.975560    5136 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1133/.minikube/bin
	I0916 04:17:09.976675    5136 out.go:352] Setting JSON to false
	I0916 04:17:09.992896    5136 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4592,"bootTime":1726480837,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0916 04:17:09.992962    5136 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 04:17:09.999538    5136 out.go:177] * [kindnet-725000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0916 04:17:10.007453    5136 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 04:17:10.007517    5136 notify.go:220] Checking for updates...
	I0916 04:17:10.014434    5136 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	I0916 04:17:10.017445    5136 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0916 04:17:10.020489    5136 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 04:17:10.023389    5136 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	I0916 04:17:10.026434    5136 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 04:17:10.029765    5136 config.go:182] Loaded profile config "multinode-990000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 04:17:10.029834    5136 config.go:182] Loaded profile config "stopped-upgrade-716000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0916 04:17:10.029887    5136 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 04:17:10.033394    5136 out.go:177] * Using the qemu2 driver based on user configuration
	I0916 04:17:10.040455    5136 start.go:297] selected driver: qemu2
	I0916 04:17:10.040460    5136 start.go:901] validating driver "qemu2" against <nil>
	I0916 04:17:10.040465    5136 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 04:17:10.042626    5136 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 04:17:10.045477    5136 out.go:177] * Automatically selected the socket_vmnet network
	I0916 04:17:10.048481    5136 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 04:17:10.048497    5136 cni.go:84] Creating CNI manager for "kindnet"
	I0916 04:17:10.048500    5136 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0916 04:17:10.048527    5136 start.go:340] cluster config:
	{Name:kindnet-725000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-725000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 04:17:10.051889    5136 iso.go:125] acquiring lock: {Name:mk55e9fb1297ea51932361bbd0234f8c1091a697 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 04:17:10.057432    5136 out.go:177] * Starting "kindnet-725000" primary control-plane node in "kindnet-725000" cluster
	I0916 04:17:10.061423    5136 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 04:17:10.061438    5136 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0916 04:17:10.061451    5136 cache.go:56] Caching tarball of preloaded images
	I0916 04:17:10.061506    5136 preload.go:172] Found /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0916 04:17:10.061512    5136 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0916 04:17:10.061585    5136 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/kindnet-725000/config.json ...
	I0916 04:17:10.061597    5136 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/kindnet-725000/config.json: {Name:mk5be5c791d671d8c3760ab8883a90c3f7e09caa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 04:17:10.061989    5136 start.go:360] acquireMachinesLock for kindnet-725000: {Name:mk751074fc8f4092f9eb4cf42ed6e63cca97a7fa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 04:17:10.062028    5136 start.go:364] duration metric: took 33.375µs to acquireMachinesLock for "kindnet-725000"
	I0916 04:17:10.062037    5136 start.go:93] Provisioning new machine with config: &{Name:kindnet-725000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-725000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 04:17:10.062060    5136 start.go:125] createHost starting for "" (driver="qemu2")
	I0916 04:17:10.065413    5136 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0916 04:17:10.081013    5136 start.go:159] libmachine.API.Create for "kindnet-725000" (driver="qemu2")
	I0916 04:17:10.081045    5136 client.go:168] LocalClient.Create starting
	I0916 04:17:10.081116    5136 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/ca.pem
	I0916 04:17:10.081147    5136 main.go:141] libmachine: Decoding PEM data...
	I0916 04:17:10.081171    5136 main.go:141] libmachine: Parsing certificate...
	I0916 04:17:10.081210    5136 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/cert.pem
	I0916 04:17:10.081233    5136 main.go:141] libmachine: Decoding PEM data...
	I0916 04:17:10.081244    5136 main.go:141] libmachine: Parsing certificate...
	I0916 04:17:10.081735    5136 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19651-1133/.minikube/cache/iso/arm64/minikube-v1.34.0-1726415472-19646-arm64.iso...
	I0916 04:17:10.243714    5136 main.go:141] libmachine: Creating SSH key...
	I0916 04:17:10.322080    5136 main.go:141] libmachine: Creating Disk image...
	I0916 04:17:10.322086    5136 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0916 04:17:10.322279    5136 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/kindnet-725000/disk.qcow2.raw /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/kindnet-725000/disk.qcow2
	I0916 04:17:10.331273    5136 main.go:141] libmachine: STDOUT: 
	I0916 04:17:10.331376    5136 main.go:141] libmachine: STDERR: 
	I0916 04:17:10.331440    5136 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/kindnet-725000/disk.qcow2 +20000M
	I0916 04:17:10.339620    5136 main.go:141] libmachine: STDOUT: Image resized.
	
	I0916 04:17:10.339637    5136 main.go:141] libmachine: STDERR: 
	I0916 04:17:10.339659    5136 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/kindnet-725000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/kindnet-725000/disk.qcow2
	I0916 04:17:10.339664    5136 main.go:141] libmachine: Starting QEMU VM...
	I0916 04:17:10.339676    5136 qemu.go:418] Using hvf for hardware acceleration
	I0916 04:17:10.339705    5136 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/kindnet-725000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/kindnet-725000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/kindnet-725000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:34:ff:fe:84:23 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/kindnet-725000/disk.qcow2
	I0916 04:17:10.341265    5136 main.go:141] libmachine: STDOUT: 
	I0916 04:17:10.341280    5136 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 04:17:10.341303    5136 client.go:171] duration metric: took 260.257125ms to LocalClient.Create
	I0916 04:17:12.343535    5136 start.go:128] duration metric: took 2.281489708s to createHost
	I0916 04:17:12.343623    5136 start.go:83] releasing machines lock for "kindnet-725000", held for 2.281630917s
	W0916 04:17:12.343677    5136 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 04:17:12.358041    5136 out.go:177] * Deleting "kindnet-725000" in qemu2 ...
	W0916 04:17:12.390170    5136 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 04:17:12.390201    5136 start.go:729] Will try again in 5 seconds ...
	I0916 04:17:17.392280    5136 start.go:360] acquireMachinesLock for kindnet-725000: {Name:mk751074fc8f4092f9eb4cf42ed6e63cca97a7fa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 04:17:17.392895    5136 start.go:364] duration metric: took 498.667µs to acquireMachinesLock for "kindnet-725000"
	I0916 04:17:17.392997    5136 start.go:93] Provisioning new machine with config: &{Name:kindnet-725000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-725000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 04:17:17.393414    5136 start.go:125] createHost starting for "" (driver="qemu2")
	I0916 04:17:17.400035    5136 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0916 04:17:17.450099    5136 start.go:159] libmachine.API.Create for "kindnet-725000" (driver="qemu2")
	I0916 04:17:17.450152    5136 client.go:168] LocalClient.Create starting
	I0916 04:17:17.450291    5136 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/ca.pem
	I0916 04:17:17.450377    5136 main.go:141] libmachine: Decoding PEM data...
	I0916 04:17:17.450399    5136 main.go:141] libmachine: Parsing certificate...
	I0916 04:17:17.450477    5136 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/cert.pem
	I0916 04:17:17.450524    5136 main.go:141] libmachine: Decoding PEM data...
	I0916 04:17:17.450542    5136 main.go:141] libmachine: Parsing certificate...
	I0916 04:17:17.451066    5136 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19651-1133/.minikube/cache/iso/arm64/minikube-v1.34.0-1726415472-19646-arm64.iso...
	I0916 04:17:17.620690    5136 main.go:141] libmachine: Creating SSH key...
	I0916 04:17:17.781230    5136 main.go:141] libmachine: Creating Disk image...
	I0916 04:17:17.781241    5136 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0916 04:17:17.781468    5136 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/kindnet-725000/disk.qcow2.raw /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/kindnet-725000/disk.qcow2
	I0916 04:17:17.790972    5136 main.go:141] libmachine: STDOUT: 
	I0916 04:17:17.790994    5136 main.go:141] libmachine: STDERR: 
	I0916 04:17:17.791047    5136 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/kindnet-725000/disk.qcow2 +20000M
	I0916 04:17:17.798981    5136 main.go:141] libmachine: STDOUT: Image resized.
	
	I0916 04:17:17.798997    5136 main.go:141] libmachine: STDERR: 
	I0916 04:17:17.799017    5136 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/kindnet-725000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/kindnet-725000/disk.qcow2
	I0916 04:17:17.799024    5136 main.go:141] libmachine: Starting QEMU VM...
	I0916 04:17:17.799032    5136 qemu.go:418] Using hvf for hardware acceleration
	I0916 04:17:17.799060    5136 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/kindnet-725000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/kindnet-725000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/kindnet-725000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:cf:0a:73:a5:a8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/kindnet-725000/disk.qcow2
	I0916 04:17:17.800725    5136 main.go:141] libmachine: STDOUT: 
	I0916 04:17:17.800740    5136 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 04:17:17.800752    5136 client.go:171] duration metric: took 350.601542ms to LocalClient.Create
	I0916 04:17:19.802920    5136 start.go:128] duration metric: took 2.409516041s to createHost
	I0916 04:17:19.803029    5136 start.go:83] releasing machines lock for "kindnet-725000", held for 2.410131917s
	W0916 04:17:19.803421    5136 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-725000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-725000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 04:17:19.816239    5136 out.go:201] 
	W0916 04:17:19.817992    5136 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0916 04:17:19.818012    5136 out.go:270] * 
	* 
	W0916 04:17:19.820231    5136 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 04:17:19.832106    5136 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.91s)
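
Note: everything up to the VM launch succeeds in these runs; each stderr trace shows both qemu-img steps completing before socket_vmnet_client fails. A sketch of the two disk-image commands visible in the logs, wrapped in Go roughly the way a driver might shell out (paths are illustrative; the qemu-img flags are exactly those logged):

	// disk.go: convert the raw boot image to qcow2, then grow it by 20000 MB.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func createDiskImage(raw, qcow2 string) error {
		steps := [][]string{
			{"qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow2},
			{"qemu-img", "resize", qcow2, "+20000M"},
		}
		for _, args := range steps {
			if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
				return fmt.Errorf("%v failed: %v: %s", args, err, out)
			}
		}
		return nil
	}

	func main() {
		if err := createDiskImage("disk.qcow2.raw", "disk.qcow2"); err != nil {
			fmt.Println(err)
		}
	}
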
TestNetworkPlugins/group/calico/Start (9.99s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-725000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-725000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.988166458s)

-- stdout --
	* [calico-725000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19651
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-725000" primary control-plane node in "calico-725000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-725000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0916 04:17:22.124970    5249 out.go:345] Setting OutFile to fd 1 ...
	I0916 04:17:22.125109    5249 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:17:22.125112    5249 out.go:358] Setting ErrFile to fd 2...
	I0916 04:17:22.125115    5249 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:17:22.125230    5249 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1133/.minikube/bin
	I0916 04:17:22.126310    5249 out.go:352] Setting JSON to false
	I0916 04:17:22.142608    5249 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4605,"bootTime":1726480837,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0916 04:17:22.142691    5249 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 04:17:22.147956    5249 out.go:177] * [calico-725000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0916 04:17:22.155975    5249 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 04:17:22.156062    5249 notify.go:220] Checking for updates...
	I0916 04:17:22.162873    5249 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	I0916 04:17:22.165909    5249 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0916 04:17:22.168948    5249 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 04:17:22.171934    5249 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	I0916 04:17:22.174961    5249 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 04:17:22.178235    5249 config.go:182] Loaded profile config "multinode-990000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 04:17:22.178296    5249 config.go:182] Loaded profile config "stopped-upgrade-716000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0916 04:17:22.178365    5249 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 04:17:22.180803    5249 out.go:177] * Using the qemu2 driver based on user configuration
	I0916 04:17:22.187916    5249 start.go:297] selected driver: qemu2
	I0916 04:17:22.187921    5249 start.go:901] validating driver "qemu2" against <nil>
	I0916 04:17:22.187927    5249 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 04:17:22.190019    5249 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 04:17:22.191286    5249 out.go:177] * Automatically selected the socket_vmnet network
	I0916 04:17:22.194039    5249 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 04:17:22.194066    5249 cni.go:84] Creating CNI manager for "calico"
	I0916 04:17:22.194072    5249 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0916 04:17:22.194116    5249 start.go:340] cluster config:
	{Name:calico-725000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:calico-725000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 04:17:22.197922    5249 iso.go:125] acquiring lock: {Name:mk55e9fb1297ea51932361bbd0234f8c1091a697 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 04:17:22.204835    5249 out.go:177] * Starting "calico-725000" primary control-plane node in "calico-725000" cluster
	I0916 04:17:22.208960    5249 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 04:17:22.208977    5249 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0916 04:17:22.208987    5249 cache.go:56] Caching tarball of preloaded images
	I0916 04:17:22.209054    5249 preload.go:172] Found /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0916 04:17:22.209060    5249 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0916 04:17:22.209129    5249 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/calico-725000/config.json ...
	I0916 04:17:22.209141    5249 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/calico-725000/config.json: {Name:mkf15b98783bb5d4f591a81c863527438d083c5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 04:17:22.209367    5249 start.go:360] acquireMachinesLock for calico-725000: {Name:mk751074fc8f4092f9eb4cf42ed6e63cca97a7fa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 04:17:22.209398    5249 start.go:364] duration metric: took 25.75µs to acquireMachinesLock for "calico-725000"
	I0916 04:17:22.209408    5249 start.go:93] Provisioning new machine with config: &{Name:calico-725000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:calico-725000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 04:17:22.209432    5249 start.go:125] createHost starting for "" (driver="qemu2")
	I0916 04:17:22.217957    5249 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0916 04:17:22.234396    5249 start.go:159] libmachine.API.Create for "calico-725000" (driver="qemu2")
	I0916 04:17:22.234421    5249 client.go:168] LocalClient.Create starting
	I0916 04:17:22.234487    5249 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/ca.pem
	I0916 04:17:22.234519    5249 main.go:141] libmachine: Decoding PEM data...
	I0916 04:17:22.234528    5249 main.go:141] libmachine: Parsing certificate...
	I0916 04:17:22.234563    5249 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/cert.pem
	I0916 04:17:22.234586    5249 main.go:141] libmachine: Decoding PEM data...
	I0916 04:17:22.234596    5249 main.go:141] libmachine: Parsing certificate...
	I0916 04:17:22.234963    5249 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19651-1133/.minikube/cache/iso/arm64/minikube-v1.34.0-1726415472-19646-arm64.iso...
	I0916 04:17:22.396660    5249 main.go:141] libmachine: Creating SSH key...
	I0916 04:17:22.608044    5249 main.go:141] libmachine: Creating Disk image...
	I0916 04:17:22.608056    5249 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0916 04:17:22.613268    5249 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/calico-725000/disk.qcow2.raw /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/calico-725000/disk.qcow2
	I0916 04:17:22.623642    5249 main.go:141] libmachine: STDOUT: 
	I0916 04:17:22.623665    5249 main.go:141] libmachine: STDERR: 
	I0916 04:17:22.623740    5249 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/calico-725000/disk.qcow2 +20000M
	I0916 04:17:22.631887    5249 main.go:141] libmachine: STDOUT: Image resized.
	
	I0916 04:17:22.631903    5249 main.go:141] libmachine: STDERR: 
	I0916 04:17:22.631916    5249 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/calico-725000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/calico-725000/disk.qcow2
	I0916 04:17:22.631920    5249 main.go:141] libmachine: Starting QEMU VM...
	I0916 04:17:22.631935    5249 qemu.go:418] Using hvf for hardware acceleration
	I0916 04:17:22.631964    5249 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/calico-725000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/calico-725000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/calico-725000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:b8:05:ec:b8:7b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/calico-725000/disk.qcow2
	I0916 04:17:22.633560    5249 main.go:141] libmachine: STDOUT: 
	I0916 04:17:22.633578    5249 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 04:17:22.633601    5249 client.go:171] duration metric: took 399.1815ms to LocalClient.Create
	I0916 04:17:24.635793    5249 start.go:128] duration metric: took 2.426378917s to createHost
	I0916 04:17:24.635872    5249 start.go:83] releasing machines lock for "calico-725000", held for 2.426513875s
	W0916 04:17:24.635913    5249 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 04:17:24.645798    5249 out.go:177] * Deleting "calico-725000" in qemu2 ...
	W0916 04:17:24.674447    5249 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 04:17:24.674473    5249 start.go:729] Will try again in 5 seconds ...
	I0916 04:17:29.676478    5249 start.go:360] acquireMachinesLock for calico-725000: {Name:mk751074fc8f4092f9eb4cf42ed6e63cca97a7fa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 04:17:29.676608    5249 start.go:364] duration metric: took 103.167µs to acquireMachinesLock for "calico-725000"
	I0916 04:17:29.676624    5249 start.go:93] Provisioning new machine with config: &{Name:calico-725000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.1 ClusterName:calico-725000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 04:17:29.676656    5249 start.go:125] createHost starting for "" (driver="qemu2")
	I0916 04:17:29.685815    5249 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0916 04:17:29.702145    5249 start.go:159] libmachine.API.Create for "calico-725000" (driver="qemu2")
	I0916 04:17:29.702179    5249 client.go:168] LocalClient.Create starting
	I0916 04:17:29.702248    5249 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/ca.pem
	I0916 04:17:29.702289    5249 main.go:141] libmachine: Decoding PEM data...
	I0916 04:17:29.702299    5249 main.go:141] libmachine: Parsing certificate...
	I0916 04:17:29.702330    5249 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/cert.pem
	I0916 04:17:29.702354    5249 main.go:141] libmachine: Decoding PEM data...
	I0916 04:17:29.702361    5249 main.go:141] libmachine: Parsing certificate...
	I0916 04:17:29.702671    5249 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19651-1133/.minikube/cache/iso/arm64/minikube-v1.34.0-1726415472-19646-arm64.iso...
	I0916 04:17:29.965288    5249 main.go:141] libmachine: Creating SSH key...
	I0916 04:17:30.026084    5249 main.go:141] libmachine: Creating Disk image...
	I0916 04:17:30.026093    5249 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0916 04:17:30.026294    5249 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/calico-725000/disk.qcow2.raw /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/calico-725000/disk.qcow2
	I0916 04:17:30.035937    5249 main.go:141] libmachine: STDOUT: 
	I0916 04:17:30.035954    5249 main.go:141] libmachine: STDERR: 
	I0916 04:17:30.036012    5249 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/calico-725000/disk.qcow2 +20000M
	I0916 04:17:30.044219    5249 main.go:141] libmachine: STDOUT: Image resized.
	
	I0916 04:17:30.044242    5249 main.go:141] libmachine: STDERR: 
	I0916 04:17:30.044260    5249 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/calico-725000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/calico-725000/disk.qcow2
	I0916 04:17:30.044265    5249 main.go:141] libmachine: Starting QEMU VM...
	I0916 04:17:30.044274    5249 qemu.go:418] Using hvf for hardware acceleration
	I0916 04:17:30.044308    5249 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/calico-725000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/calico-725000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/calico-725000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:35:fa:41:c9:92 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/calico-725000/disk.qcow2
	I0916 04:17:30.045961    5249 main.go:141] libmachine: STDOUT: 
	I0916 04:17:30.045975    5249 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 04:17:30.045987    5249 client.go:171] duration metric: took 343.809584ms to LocalClient.Create
	I0916 04:17:32.048010    5249 start.go:128] duration metric: took 2.371394917s to createHost
	I0916 04:17:32.048032    5249 start.go:83] releasing machines lock for "calico-725000", held for 2.371464125s
	W0916 04:17:32.048124    5249 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p calico-725000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-725000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 04:17:32.058371    5249 out.go:201] 
	W0916 04:17:32.064361    5249 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0916 04:17:32.064369    5249 out.go:270] * 
	* 
	W0916 04:17:32.064962    5249 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 04:17:32.078303    5249 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.99s)
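Every failure in this group shares one root cause: socket_vmnet_client cannot reach the vmnet daemon socket at /var/run/socket_vmnet ("Connection refused"), so QEMU is never launched and minikube exits with GUEST_PROVISION (exit status 80). A minimal pre-flight check for the CI host, assuming the Homebrew layout the logs show under /opt/socket_vmnet, might look like:

	# does the daemon socket exist?
	ls -l /var/run/socket_vmnet
	# is the Homebrew-managed daemon running? (formula name assumed: socket_vmnet)
	sudo brew services list | grep socket_vmnet
	# restart it if it is not
	sudo brew services restart socket_vmnet

This is a sketch, not a documented recovery procedure; on hosts that run socket_vmnet from a hand-written launchd plist, query launchctl instead.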

TestNetworkPlugins/group/custom-flannel/Start (9.74s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-725000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-725000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.740404292s)

-- stdout --
	* [custom-flannel-725000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19651
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-725000" primary control-plane node in "custom-flannel-725000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-725000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0916 04:17:34.482418    5370 out.go:345] Setting OutFile to fd 1 ...
	I0916 04:17:34.482560    5370 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:17:34.482564    5370 out.go:358] Setting ErrFile to fd 2...
	I0916 04:17:34.482566    5370 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:17:34.482695    5370 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1133/.minikube/bin
	I0916 04:17:34.483777    5370 out.go:352] Setting JSON to false
	I0916 04:17:34.500185    5370 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4617,"bootTime":1726480837,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0916 04:17:34.500254    5370 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 04:17:34.506289    5370 out.go:177] * [custom-flannel-725000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0916 04:17:34.514237    5370 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 04:17:34.514276    5370 notify.go:220] Checking for updates...
	I0916 04:17:34.520202    5370 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	I0916 04:17:34.523232    5370 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0916 04:17:34.524801    5370 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 04:17:34.528207    5370 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	I0916 04:17:34.531252    5370 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 04:17:34.534626    5370 config.go:182] Loaded profile config "multinode-990000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 04:17:34.534698    5370 config.go:182] Loaded profile config "stopped-upgrade-716000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0916 04:17:34.534749    5370 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 04:17:34.539158    5370 out.go:177] * Using the qemu2 driver based on user configuration
	I0916 04:17:34.546240    5370 start.go:297] selected driver: qemu2
	I0916 04:17:34.546246    5370 start.go:901] validating driver "qemu2" against <nil>
	I0916 04:17:34.546253    5370 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 04:17:34.548551    5370 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 04:17:34.551186    5370 out.go:177] * Automatically selected the socket_vmnet network
	I0916 04:17:34.554296    5370 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 04:17:34.554311    5370 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0916 04:17:34.554318    5370 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0916 04:17:34.554352    5370 start.go:340] cluster config:
	{Name:custom-flannel-725000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-725000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClie
ntPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 04:17:34.557773    5370 iso.go:125] acquiring lock: {Name:mk55e9fb1297ea51932361bbd0234f8c1091a697 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 04:17:34.565180    5370 out.go:177] * Starting "custom-flannel-725000" primary control-plane node in "custom-flannel-725000" cluster
	I0916 04:17:34.569266    5370 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 04:17:34.569281    5370 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0916 04:17:34.569288    5370 cache.go:56] Caching tarball of preloaded images
	I0916 04:17:34.569344    5370 preload.go:172] Found /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0916 04:17:34.569354    5370 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0916 04:17:34.569403    5370 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/custom-flannel-725000/config.json ...
	I0916 04:17:34.569414    5370 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/custom-flannel-725000/config.json: {Name:mk114e6b22ebd1fbc70a46b4d0312af8628f5914 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 04:17:34.569625    5370 start.go:360] acquireMachinesLock for custom-flannel-725000: {Name:mk751074fc8f4092f9eb4cf42ed6e63cca97a7fa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 04:17:34.569658    5370 start.go:364] duration metric: took 25.25µs to acquireMachinesLock for "custom-flannel-725000"
	I0916 04:17:34.569669    5370 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-725000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-725000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 04:17:34.569693    5370 start.go:125] createHost starting for "" (driver="qemu2")
	I0916 04:17:34.581313    5370 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0916 04:17:34.596775    5370 start.go:159] libmachine.API.Create for "custom-flannel-725000" (driver="qemu2")
	I0916 04:17:34.596803    5370 client.go:168] LocalClient.Create starting
	I0916 04:17:34.596865    5370 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/ca.pem
	I0916 04:17:34.596895    5370 main.go:141] libmachine: Decoding PEM data...
	I0916 04:17:34.596904    5370 main.go:141] libmachine: Parsing certificate...
	I0916 04:17:34.596945    5370 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/cert.pem
	I0916 04:17:34.596972    5370 main.go:141] libmachine: Decoding PEM data...
	I0916 04:17:34.596979    5370 main.go:141] libmachine: Parsing certificate...
	I0916 04:17:34.597396    5370 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19651-1133/.minikube/cache/iso/arm64/minikube-v1.34.0-1726415472-19646-arm64.iso...
	I0916 04:17:34.760414    5370 main.go:141] libmachine: Creating SSH key...
	I0916 04:17:34.801276    5370 main.go:141] libmachine: Creating Disk image...
	I0916 04:17:34.801282    5370 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0916 04:17:34.801482    5370 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/custom-flannel-725000/disk.qcow2.raw /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/custom-flannel-725000/disk.qcow2
	I0916 04:17:34.810751    5370 main.go:141] libmachine: STDOUT: 
	I0916 04:17:34.810773    5370 main.go:141] libmachine: STDERR: 
	I0916 04:17:34.810837    5370 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/custom-flannel-725000/disk.qcow2 +20000M
	I0916 04:17:34.818653    5370 main.go:141] libmachine: STDOUT: Image resized.
	
	I0916 04:17:34.818675    5370 main.go:141] libmachine: STDERR: 
	I0916 04:17:34.818699    5370 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/custom-flannel-725000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/custom-flannel-725000/disk.qcow2
	I0916 04:17:34.818704    5370 main.go:141] libmachine: Starting QEMU VM...
	I0916 04:17:34.818715    5370 qemu.go:418] Using hvf for hardware acceleration
	I0916 04:17:34.818743    5370 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/custom-flannel-725000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/custom-flannel-725000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/custom-flannel-725000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:3e:ab:b5:ff:b0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/custom-flannel-725000/disk.qcow2
	I0916 04:17:34.820366    5370 main.go:141] libmachine: STDOUT: 
	I0916 04:17:34.820379    5370 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 04:17:34.820409    5370 client.go:171] duration metric: took 223.6025ms to LocalClient.Create
	I0916 04:17:36.822563    5370 start.go:128] duration metric: took 2.252886792s to createHost
	I0916 04:17:36.822633    5370 start.go:83] releasing machines lock for "custom-flannel-725000", held for 2.253009458s
	W0916 04:17:36.822719    5370 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 04:17:36.829329    5370 out.go:177] * Deleting "custom-flannel-725000" in qemu2 ...
	W0916 04:17:36.859083    5370 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 04:17:36.859117    5370 start.go:729] Will try again in 5 seconds ...
	I0916 04:17:41.859418    5370 start.go:360] acquireMachinesLock for custom-flannel-725000: {Name:mk751074fc8f4092f9eb4cf42ed6e63cca97a7fa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 04:17:41.859874    5370 start.go:364] duration metric: took 347.791µs to acquireMachinesLock for "custom-flannel-725000"
	I0916 04:17:41.860012    5370 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-725000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-725000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 04:17:41.860269    5370 start.go:125] createHost starting for "" (driver="qemu2")
	I0916 04:17:41.869897    5370 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0916 04:17:41.914043    5370 start.go:159] libmachine.API.Create for "custom-flannel-725000" (driver="qemu2")
	I0916 04:17:41.914101    5370 client.go:168] LocalClient.Create starting
	I0916 04:17:41.914210    5370 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/ca.pem
	I0916 04:17:41.914265    5370 main.go:141] libmachine: Decoding PEM data...
	I0916 04:17:41.914281    5370 main.go:141] libmachine: Parsing certificate...
	I0916 04:17:41.914337    5370 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/cert.pem
	I0916 04:17:41.914381    5370 main.go:141] libmachine: Decoding PEM data...
	I0916 04:17:41.914391    5370 main.go:141] libmachine: Parsing certificate...
	I0916 04:17:41.915360    5370 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19651-1133/.minikube/cache/iso/arm64/minikube-v1.34.0-1726415472-19646-arm64.iso...
	I0916 04:17:42.083653    5370 main.go:141] libmachine: Creating SSH key...
	I0916 04:17:42.138849    5370 main.go:141] libmachine: Creating Disk image...
	I0916 04:17:42.138858    5370 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0916 04:17:42.139067    5370 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/custom-flannel-725000/disk.qcow2.raw /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/custom-flannel-725000/disk.qcow2
	I0916 04:17:42.148575    5370 main.go:141] libmachine: STDOUT: 
	I0916 04:17:42.148597    5370 main.go:141] libmachine: STDERR: 
	I0916 04:17:42.148670    5370 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/custom-flannel-725000/disk.qcow2 +20000M
	I0916 04:17:42.156687    5370 main.go:141] libmachine: STDOUT: Image resized.
	
	I0916 04:17:42.156708    5370 main.go:141] libmachine: STDERR: 
	I0916 04:17:42.156722    5370 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/custom-flannel-725000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/custom-flannel-725000/disk.qcow2
	I0916 04:17:42.156729    5370 main.go:141] libmachine: Starting QEMU VM...
	I0916 04:17:42.156737    5370 qemu.go:418] Using hvf for hardware acceleration
	I0916 04:17:42.156766    5370 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/custom-flannel-725000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/custom-flannel-725000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/custom-flannel-725000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:e6:1b:f8:7d:29 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/custom-flannel-725000/disk.qcow2
	I0916 04:17:42.158365    5370 main.go:141] libmachine: STDOUT: 
	I0916 04:17:42.158386    5370 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 04:17:42.158399    5370 client.go:171] duration metric: took 244.297875ms to LocalClient.Create
	I0916 04:17:44.159478    5370 start.go:128] duration metric: took 2.299232041s to createHost
	I0916 04:17:44.159507    5370 start.go:83] releasing machines lock for "custom-flannel-725000", held for 2.299660833s
	W0916 04:17:44.159664    5370 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-725000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-725000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 04:17:44.166798    5370 out.go:201] 
	W0916 04:17:44.172901    5370 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0916 04:17:44.172920    5370 out.go:270] * 
	* 
	W0916 04:17:44.173616    5370 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 04:17:44.184850    5370 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.74s)
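Note that disk provisioning succeeds on every attempt: libmachine converts the raw boot image to qcow2 and grows it by the requested 20000 MB before anything fails. The equivalent commands, copied from the log above with the long absolute paths shortened for readability, are:

	qemu-img convert -f raw -O qcow2 disk.qcow2.raw disk.qcow2
	qemu-img resize disk.qcow2 +20000M

The launch fails one step later, when qemu-system-aarch64 is started through socket_vmnet_client, which is expected to hand QEMU the vmnet datagram socket as file descriptor 3 (hence -netdev socket,id=net0,fd=3 on the command line). With nothing listening on /var/run/socket_vmnet the wrapper exits immediately and the VM is never created.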

TestNetworkPlugins/group/false/Start (9.84s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-725000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-725000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.841184959s)

-- stdout --
	* [false-725000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19651
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-725000" primary control-plane node in "false-725000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-725000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0916 04:17:46.586715    5491 out.go:345] Setting OutFile to fd 1 ...
	I0916 04:17:46.586842    5491 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:17:46.586845    5491 out.go:358] Setting ErrFile to fd 2...
	I0916 04:17:46.586847    5491 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:17:46.586977    5491 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1133/.minikube/bin
	I0916 04:17:46.588116    5491 out.go:352] Setting JSON to false
	I0916 04:17:46.604332    5491 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4629,"bootTime":1726480837,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0916 04:17:46.604400    5491 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 04:17:46.609959    5491 out.go:177] * [false-725000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0916 04:17:46.617829    5491 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 04:17:46.617894    5491 notify.go:220] Checking for updates...
	I0916 04:17:46.623848    5491 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	I0916 04:17:46.634432    5491 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0916 04:17:46.636049    5491 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 04:17:46.638800    5491 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	I0916 04:17:46.641848    5491 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 04:17:46.645235    5491 config.go:182] Loaded profile config "multinode-990000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 04:17:46.645304    5491 config.go:182] Loaded profile config "stopped-upgrade-716000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0916 04:17:46.645349    5491 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 04:17:46.649863    5491 out.go:177] * Using the qemu2 driver based on user configuration
	I0916 04:17:46.656885    5491 start.go:297] selected driver: qemu2
	I0916 04:17:46.656891    5491 start.go:901] validating driver "qemu2" against <nil>
	I0916 04:17:46.656898    5491 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 04:17:46.659238    5491 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 04:17:46.662783    5491 out.go:177] * Automatically selected the socket_vmnet network
	I0916 04:17:46.666908    5491 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 04:17:46.666932    5491 cni.go:84] Creating CNI manager for "false"
	I0916 04:17:46.666971    5491 start.go:340] cluster config:
	{Name:false-725000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:false-725000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:do
cker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_
client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 04:17:46.670562    5491 iso.go:125] acquiring lock: {Name:mk55e9fb1297ea51932361bbd0234f8c1091a697 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 04:17:46.678876    5491 out.go:177] * Starting "false-725000" primary control-plane node in "false-725000" cluster
	I0916 04:17:46.682847    5491 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 04:17:46.682864    5491 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0916 04:17:46.682879    5491 cache.go:56] Caching tarball of preloaded images
	I0916 04:17:46.682955    5491 preload.go:172] Found /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0916 04:17:46.682962    5491 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0916 04:17:46.683028    5491 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/false-725000/config.json ...
	I0916 04:17:46.683040    5491 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/false-725000/config.json: {Name:mk37e6b225fe6fdc903f9a6d8505f289923be5ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 04:17:46.683394    5491 start.go:360] acquireMachinesLock for false-725000: {Name:mk751074fc8f4092f9eb4cf42ed6e63cca97a7fa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 04:17:46.683443    5491 start.go:364] duration metric: took 40.458µs to acquireMachinesLock for "false-725000"
	I0916 04:17:46.683456    5491 start.go:93] Provisioning new machine with config: &{Name:false-725000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:false-725000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mo
untPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 04:17:46.683490    5491 start.go:125] createHost starting for "" (driver="qemu2")
	I0916 04:17:46.690907    5491 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0916 04:17:46.709112    5491 start.go:159] libmachine.API.Create for "false-725000" (driver="qemu2")
	I0916 04:17:46.709154    5491 client.go:168] LocalClient.Create starting
	I0916 04:17:46.709232    5491 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/ca.pem
	I0916 04:17:46.709263    5491 main.go:141] libmachine: Decoding PEM data...
	I0916 04:17:46.709273    5491 main.go:141] libmachine: Parsing certificate...
	I0916 04:17:46.709314    5491 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/cert.pem
	I0916 04:17:46.709342    5491 main.go:141] libmachine: Decoding PEM data...
	I0916 04:17:46.709353    5491 main.go:141] libmachine: Parsing certificate...
	I0916 04:17:46.709824    5491 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19651-1133/.minikube/cache/iso/arm64/minikube-v1.34.0-1726415472-19646-arm64.iso...
	I0916 04:17:46.872480    5491 main.go:141] libmachine: Creating SSH key...
	I0916 04:17:46.931366    5491 main.go:141] libmachine: Creating Disk image...
	I0916 04:17:46.931379    5491 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0916 04:17:46.931596    5491 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/false-725000/disk.qcow2.raw /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/false-725000/disk.qcow2
	I0916 04:17:46.941314    5491 main.go:141] libmachine: STDOUT: 
	I0916 04:17:46.941337    5491 main.go:141] libmachine: STDERR: 
	I0916 04:17:46.941399    5491 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/false-725000/disk.qcow2 +20000M
	I0916 04:17:46.949970    5491 main.go:141] libmachine: STDOUT: Image resized.
	
	I0916 04:17:46.949995    5491 main.go:141] libmachine: STDERR: 
	I0916 04:17:46.950020    5491 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/false-725000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/false-725000/disk.qcow2
	I0916 04:17:46.950026    5491 main.go:141] libmachine: Starting QEMU VM...
	I0916 04:17:46.950040    5491 qemu.go:418] Using hvf for hardware acceleration
	I0916 04:17:46.950066    5491 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/false-725000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/false-725000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/false-725000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:2f:78:17:55:ed -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/false-725000/disk.qcow2
	I0916 04:17:46.951801    5491 main.go:141] libmachine: STDOUT: 
	I0916 04:17:46.951827    5491 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 04:17:46.951854    5491 client.go:171] duration metric: took 242.697208ms to LocalClient.Create
	I0916 04:17:48.954073    5491 start.go:128] duration metric: took 2.270590042s to createHost
	I0916 04:17:48.954152    5491 start.go:83] releasing machines lock for "false-725000", held for 2.270745959s
	W0916 04:17:48.954188    5491 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 04:17:48.964886    5491 out.go:177] * Deleting "false-725000" in qemu2 ...
	W0916 04:17:48.991744    5491 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 04:17:48.991766    5491 start.go:729] Will try again in 5 seconds ...
	I0916 04:17:53.993998    5491 start.go:360] acquireMachinesLock for false-725000: {Name:mk751074fc8f4092f9eb4cf42ed6e63cca97a7fa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 04:17:53.994385    5491 start.go:364] duration metric: took 307.792µs to acquireMachinesLock for "false-725000"
	I0916 04:17:53.994469    5491 start.go:93] Provisioning new machine with config: &{Name:false-725000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:false-725000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mo
untPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 04:17:53.994661    5491 start.go:125] createHost starting for "" (driver="qemu2")
	I0916 04:17:53.999007    5491 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0916 04:17:54.038692    5491 start.go:159] libmachine.API.Create for "false-725000" (driver="qemu2")
	I0916 04:17:54.038740    5491 client.go:168] LocalClient.Create starting
	I0916 04:17:54.038874    5491 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/ca.pem
	I0916 04:17:54.038931    5491 main.go:141] libmachine: Decoding PEM data...
	I0916 04:17:54.038944    5491 main.go:141] libmachine: Parsing certificate...
	I0916 04:17:54.038998    5491 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/cert.pem
	I0916 04:17:54.039037    5491 main.go:141] libmachine: Decoding PEM data...
	I0916 04:17:54.039047    5491 main.go:141] libmachine: Parsing certificate...
	I0916 04:17:54.039674    5491 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19651-1133/.minikube/cache/iso/arm64/minikube-v1.34.0-1726415472-19646-arm64.iso...
	I0916 04:17:54.208960    5491 main.go:141] libmachine: Creating SSH key...
	I0916 04:17:54.343758    5491 main.go:141] libmachine: Creating Disk image...
	I0916 04:17:54.343766    5491 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0916 04:17:54.343964    5491 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/false-725000/disk.qcow2.raw /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/false-725000/disk.qcow2
	I0916 04:17:54.353491    5491 main.go:141] libmachine: STDOUT: 
	I0916 04:17:54.353509    5491 main.go:141] libmachine: STDERR: 
	I0916 04:17:54.353568    5491 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/false-725000/disk.qcow2 +20000M
	I0916 04:17:54.361488    5491 main.go:141] libmachine: STDOUT: Image resized.
	
	I0916 04:17:54.361505    5491 main.go:141] libmachine: STDERR: 
	I0916 04:17:54.361517    5491 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/false-725000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/false-725000/disk.qcow2
	I0916 04:17:54.361523    5491 main.go:141] libmachine: Starting QEMU VM...
	I0916 04:17:54.361534    5491 qemu.go:418] Using hvf for hardware acceleration
	I0916 04:17:54.361558    5491 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/false-725000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/false-725000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/false-725000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:df:16:b6:75:38 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/false-725000/disk.qcow2
	I0916 04:17:54.363193    5491 main.go:141] libmachine: STDOUT: 
	I0916 04:17:54.363208    5491 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 04:17:54.363225    5491 client.go:171] duration metric: took 324.479709ms to LocalClient.Create
	I0916 04:17:56.365322    5491 start.go:128] duration metric: took 2.370689958s to createHost
	I0916 04:17:56.365380    5491 start.go:83] releasing machines lock for "false-725000", held for 2.371023792s
	W0916 04:17:56.365473    5491 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p false-725000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-725000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 04:17:56.373654    5491 out.go:201] 
	W0916 04:17:56.380691    5491 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0916 04:17:56.380698    5491 out.go:270] * 
	* 
	W0916 04:17:56.381173    5491 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 04:17:56.387666    5491 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.84s)
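Editor's note: every failure in this group has the same root cause, visible in the stderr above: nothing is accepting connections on /var/run/socket_vmnet, so /opt/socket_vmnet/bin/socket_vmnet_client is refused before QEMU ever boots the VM. Below is a minimal standalone Go sketch (not minikube code; the socket path is taken straight from the log) that reproduces the diagnosis:

	// probe_socket_vmnet.go — checks whether a socket_vmnet daemon is
	// serving the unix socket that the failing runs above try to use.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path reported in the failing runs
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// "connection refused" here matches the test failures: the socket
			// file may exist, but no daemon is listening behind it.
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If this probe fails on the CI host, the first thing to check is that the socket_vmnet daemon is actually running; every subsequent Start failure in this section is the same refusal repeated per profile.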

TestNetworkPlugins/group/enable-default-cni/Start (9.87s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-725000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-725000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.86559625s)

-- stdout --
	* [enable-default-cni-725000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19651
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-725000" primary control-plane node in "enable-default-cni-725000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-725000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0916 04:17:58.568827    5606 out.go:345] Setting OutFile to fd 1 ...
	I0916 04:17:58.568954    5606 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:17:58.568958    5606 out.go:358] Setting ErrFile to fd 2...
	I0916 04:17:58.568960    5606 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:17:58.569076    5606 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1133/.minikube/bin
	I0916 04:17:58.570199    5606 out.go:352] Setting JSON to false
	I0916 04:17:58.586205    5606 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4641,"bootTime":1726480837,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0916 04:17:58.586278    5606 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 04:17:58.591825    5606 out.go:177] * [enable-default-cni-725000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0916 04:17:58.599890    5606 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 04:17:58.599927    5606 notify.go:220] Checking for updates...
	I0916 04:17:58.606957    5606 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	I0916 04:17:58.609837    5606 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0916 04:17:58.612858    5606 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 04:17:58.615857    5606 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	I0916 04:17:58.618881    5606 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 04:17:58.622251    5606 config.go:182] Loaded profile config "multinode-990000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 04:17:58.622312    5606 config.go:182] Loaded profile config "stopped-upgrade-716000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0916 04:17:58.622366    5606 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 04:17:58.626800    5606 out.go:177] * Using the qemu2 driver based on user configuration
	I0916 04:17:58.633836    5606 start.go:297] selected driver: qemu2
	I0916 04:17:58.633842    5606 start.go:901] validating driver "qemu2" against <nil>
	I0916 04:17:58.633849    5606 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 04:17:58.636042    5606 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 04:17:58.638876    5606 out.go:177] * Automatically selected the socket_vmnet network
	E0916 04:17:58.641940    5606 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0916 04:17:58.641953    5606 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 04:17:58.641970    5606 cni.go:84] Creating CNI manager for "bridge"
	I0916 04:17:58.641975    5606 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0916 04:17:58.641999    5606 start.go:340] cluster config:
	{Name:enable-default-cni-725000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-725000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster
.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/
socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 04:17:58.645607    5606 iso.go:125] acquiring lock: {Name:mk55e9fb1297ea51932361bbd0234f8c1091a697 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 04:17:58.652859    5606 out.go:177] * Starting "enable-default-cni-725000" primary control-plane node in "enable-default-cni-725000" cluster
	I0916 04:17:58.656852    5606 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 04:17:58.656872    5606 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0916 04:17:58.656882    5606 cache.go:56] Caching tarball of preloaded images
	I0916 04:17:58.656938    5606 preload.go:172] Found /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0916 04:17:58.656949    5606 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0916 04:17:58.657002    5606 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/enable-default-cni-725000/config.json ...
	I0916 04:17:58.657013    5606 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/enable-default-cni-725000/config.json: {Name:mk4c88fb27d6bf8d5f8a34c62cde4e9bb21621e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 04:17:58.657245    5606 start.go:360] acquireMachinesLock for enable-default-cni-725000: {Name:mk751074fc8f4092f9eb4cf42ed6e63cca97a7fa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 04:17:58.657280    5606 start.go:364] duration metric: took 27.042µs to acquireMachinesLock for "enable-default-cni-725000"
	I0916 04:17:58.657291    5606 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-725000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-725000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountM
Size:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 04:17:58.657316    5606 start.go:125] createHost starting for "" (driver="qemu2")
	I0916 04:17:58.664820    5606 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0916 04:17:58.681641    5606 start.go:159] libmachine.API.Create for "enable-default-cni-725000" (driver="qemu2")
	I0916 04:17:58.681684    5606 client.go:168] LocalClient.Create starting
	I0916 04:17:58.681756    5606 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/ca.pem
	I0916 04:17:58.681800    5606 main.go:141] libmachine: Decoding PEM data...
	I0916 04:17:58.681809    5606 main.go:141] libmachine: Parsing certificate...
	I0916 04:17:58.681847    5606 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/cert.pem
	I0916 04:17:58.681872    5606 main.go:141] libmachine: Decoding PEM data...
	I0916 04:17:58.681879    5606 main.go:141] libmachine: Parsing certificate...
	I0916 04:17:58.682224    5606 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19651-1133/.minikube/cache/iso/arm64/minikube-v1.34.0-1726415472-19646-arm64.iso...
	I0916 04:17:58.844491    5606 main.go:141] libmachine: Creating SSH key...
	I0916 04:17:58.943213    5606 main.go:141] libmachine: Creating Disk image...
	I0916 04:17:58.943220    5606 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0916 04:17:58.943408    5606 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/enable-default-cni-725000/disk.qcow2.raw /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/enable-default-cni-725000/disk.qcow2
	I0916 04:17:58.953058    5606 main.go:141] libmachine: STDOUT: 
	I0916 04:17:58.953082    5606 main.go:141] libmachine: STDERR: 
	I0916 04:17:58.953151    5606 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/enable-default-cni-725000/disk.qcow2 +20000M
	I0916 04:17:58.961552    5606 main.go:141] libmachine: STDOUT: Image resized.
	
	I0916 04:17:58.961576    5606 main.go:141] libmachine: STDERR: 
	I0916 04:17:58.961590    5606 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/enable-default-cni-725000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/enable-default-cni-725000/disk.qcow2
	I0916 04:17:58.961616    5606 main.go:141] libmachine: Starting QEMU VM...
	I0916 04:17:58.961628    5606 qemu.go:418] Using hvf for hardware acceleration
	I0916 04:17:58.961653    5606 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/enable-default-cni-725000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/enable-default-cni-725000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/enable-default-cni-725000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:1f:41:c3:d2:9c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/enable-default-cni-725000/disk.qcow2
	I0916 04:17:58.963297    5606 main.go:141] libmachine: STDOUT: 
	I0916 04:17:58.963311    5606 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 04:17:58.963331    5606 client.go:171] duration metric: took 281.645709ms to LocalClient.Create
	I0916 04:18:00.966774    5606 start.go:128] duration metric: took 2.3094795s to createHost
	I0916 04:18:00.966842    5606 start.go:83] releasing machines lock for "enable-default-cni-725000", held for 2.309598208s
	W0916 04:18:00.966907    5606 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 04:18:00.973363    5606 out.go:177] * Deleting "enable-default-cni-725000" in qemu2 ...
	W0916 04:18:01.004684    5606 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 04:18:01.004709    5606 start.go:729] Will try again in 5 seconds ...
	I0916 04:18:06.006896    5606 start.go:360] acquireMachinesLock for enable-default-cni-725000: {Name:mk751074fc8f4092f9eb4cf42ed6e63cca97a7fa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 04:18:06.007516    5606 start.go:364] duration metric: took 497.334µs to acquireMachinesLock for "enable-default-cni-725000"
	I0916 04:18:06.007591    5606 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-725000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-725000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountM
Size:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 04:18:06.007824    5606 start.go:125] createHost starting for "" (driver="qemu2")
	I0916 04:18:06.013597    5606 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0916 04:18:06.064448    5606 start.go:159] libmachine.API.Create for "enable-default-cni-725000" (driver="qemu2")
	I0916 04:18:06.064505    5606 client.go:168] LocalClient.Create starting
	I0916 04:18:06.064693    5606 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/ca.pem
	I0916 04:18:06.064782    5606 main.go:141] libmachine: Decoding PEM data...
	I0916 04:18:06.064805    5606 main.go:141] libmachine: Parsing certificate...
	I0916 04:18:06.064883    5606 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/cert.pem
	I0916 04:18:06.064930    5606 main.go:141] libmachine: Decoding PEM data...
	I0916 04:18:06.064948    5606 main.go:141] libmachine: Parsing certificate...
	I0916 04:18:06.065470    5606 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19651-1133/.minikube/cache/iso/arm64/minikube-v1.34.0-1726415472-19646-arm64.iso...
	I0916 04:18:06.236996    5606 main.go:141] libmachine: Creating SSH key...
	I0916 04:18:06.356977    5606 main.go:141] libmachine: Creating Disk image...
	I0916 04:18:06.356987    5606 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0916 04:18:06.357192    5606 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/enable-default-cni-725000/disk.qcow2.raw /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/enable-default-cni-725000/disk.qcow2
	I0916 04:18:06.366515    5606 main.go:141] libmachine: STDOUT: 
	I0916 04:18:06.366534    5606 main.go:141] libmachine: STDERR: 
	I0916 04:18:06.366592    5606 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/enable-default-cni-725000/disk.qcow2 +20000M
	I0916 04:18:06.374475    5606 main.go:141] libmachine: STDOUT: Image resized.
	
	I0916 04:18:06.374494    5606 main.go:141] libmachine: STDERR: 
	I0916 04:18:06.374505    5606 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/enable-default-cni-725000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/enable-default-cni-725000/disk.qcow2
	I0916 04:18:06.374510    5606 main.go:141] libmachine: Starting QEMU VM...
	I0916 04:18:06.374519    5606 qemu.go:418] Using hvf for hardware acceleration
	I0916 04:18:06.374561    5606 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/enable-default-cni-725000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/enable-default-cni-725000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/enable-default-cni-725000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:14:71:42:71:7b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/enable-default-cni-725000/disk.qcow2
	I0916 04:18:06.376209    5606 main.go:141] libmachine: STDOUT: 
	I0916 04:18:06.376222    5606 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 04:18:06.376234    5606 client.go:171] duration metric: took 311.726708ms to LocalClient.Create
	I0916 04:18:08.377389    5606 start.go:128] duration metric: took 2.369590666s to createHost
	I0916 04:18:08.377431    5606 start.go:83] releasing machines lock for "enable-default-cni-725000", held for 2.369937709s
	W0916 04:18:08.377558    5606 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-725000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-725000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 04:18:08.386831    5606 out.go:201] 
	W0916 04:18:08.390818    5606 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0916 04:18:08.390824    5606 out.go:270] * 
	* 
	W0916 04:18:08.391386    5606 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 04:18:08.398784    5606 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.87s)
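Editor's note: besides the shared socket_vmnet refusal, the stderr above also records the deprecated-flag migration at 04:17:58.641940: --enable-default-cni is rewritten to --cni=bridge before the cluster config is generated, which is why the config dump shows EnableDefaultCNI:false CNI:bridge. An illustrative Go sketch of that kind of flag migration (hypothetical code, not minikube's actual implementation):

	package main

	import (
		"flag"
		"fmt"
	)

	func main() {
		// Deprecated boolean flag kept for compatibility, plus its replacement.
		enableDefaultCNI := flag.Bool("enable-default-cni", false, "deprecated: use --cni=bridge")
		cni := flag.String("cni", "", "CNI plugin to configure")
		flag.Parse()

		// Mirror the log line "Found deprecated --enable-default-cni flag,
		// setting --cni=bridge": only migrate when --cni was not set explicitly.
		if *enableDefaultCNI && *cni == "" {
			fmt.Println("Found deprecated --enable-default-cni flag, setting --cni=bridge")
			*cni = "bridge"
		}
		fmt.Printf("effective CNI: %q\n", *cni)
	}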

TestNetworkPlugins/group/flannel/Start (9.91s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-725000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-725000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.90713475s)

-- stdout --
	* [flannel-725000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19651
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-725000" primary control-plane node in "flannel-725000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-725000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0916 04:18:10.615568    5715 out.go:345] Setting OutFile to fd 1 ...
	I0916 04:18:10.615691    5715 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:18:10.615695    5715 out.go:358] Setting ErrFile to fd 2...
	I0916 04:18:10.615697    5715 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:18:10.615836    5715 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1133/.minikube/bin
	I0916 04:18:10.616955    5715 out.go:352] Setting JSON to false
	I0916 04:18:10.633402    5715 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4653,"bootTime":1726480837,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0916 04:18:10.633498    5715 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 04:18:10.638789    5715 out.go:177] * [flannel-725000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0916 04:18:10.646823    5715 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 04:18:10.646851    5715 notify.go:220] Checking for updates...
	I0916 04:18:10.653783    5715 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	I0916 04:18:10.655303    5715 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0916 04:18:10.658714    5715 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 04:18:10.661752    5715 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	I0916 04:18:10.664757    5715 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 04:18:10.668109    5715 config.go:182] Loaded profile config "multinode-990000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 04:18:10.668175    5715 config.go:182] Loaded profile config "stopped-upgrade-716000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0916 04:18:10.668229    5715 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 04:18:10.672719    5715 out.go:177] * Using the qemu2 driver based on user configuration
	I0916 04:18:10.679740    5715 start.go:297] selected driver: qemu2
	I0916 04:18:10.679746    5715 start.go:901] validating driver "qemu2" against <nil>
	I0916 04:18:10.679751    5715 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 04:18:10.682097    5715 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 04:18:10.685698    5715 out.go:177] * Automatically selected the socket_vmnet network
	I0916 04:18:10.688825    5715 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 04:18:10.688844    5715 cni.go:84] Creating CNI manager for "flannel"
	I0916 04:18:10.688848    5715 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0916 04:18:10.688872    5715 start.go:340] cluster config:
	{Name:flannel-725000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:flannel-725000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/sock
et_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 04:18:10.692383    5715 iso.go:125] acquiring lock: {Name:mk55e9fb1297ea51932361bbd0234f8c1091a697 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 04:18:10.699747    5715 out.go:177] * Starting "flannel-725000" primary control-plane node in "flannel-725000" cluster
	I0916 04:18:10.702723    5715 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 04:18:10.702738    5715 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0916 04:18:10.702752    5715 cache.go:56] Caching tarball of preloaded images
	I0916 04:18:10.702819    5715 preload.go:172] Found /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0916 04:18:10.702824    5715 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0916 04:18:10.702893    5715 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/flannel-725000/config.json ...
	I0916 04:18:10.702904    5715 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/flannel-725000/config.json: {Name:mk77f33894d3c31fbd856deafbae6ca7d842bce8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 04:18:10.703118    5715 start.go:360] acquireMachinesLock for flannel-725000: {Name:mk751074fc8f4092f9eb4cf42ed6e63cca97a7fa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 04:18:10.703148    5715 start.go:364] duration metric: took 24.791µs to acquireMachinesLock for "flannel-725000"
	I0916 04:18:10.703159    5715 start.go:93] Provisioning new machine with config: &{Name:flannel-725000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.1 ClusterName:flannel-725000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 04:18:10.703189    5715 start.go:125] createHost starting for "" (driver="qemu2")
	I0916 04:18:10.710573    5715 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0916 04:18:10.726104    5715 start.go:159] libmachine.API.Create for "flannel-725000" (driver="qemu2")
	I0916 04:18:10.726129    5715 client.go:168] LocalClient.Create starting
	I0916 04:18:10.726198    5715 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/ca.pem
	I0916 04:18:10.726228    5715 main.go:141] libmachine: Decoding PEM data...
	I0916 04:18:10.726239    5715 main.go:141] libmachine: Parsing certificate...
	I0916 04:18:10.726283    5715 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/cert.pem
	I0916 04:18:10.726327    5715 main.go:141] libmachine: Decoding PEM data...
	I0916 04:18:10.726338    5715 main.go:141] libmachine: Parsing certificate...
	I0916 04:18:10.726667    5715 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19651-1133/.minikube/cache/iso/arm64/minikube-v1.34.0-1726415472-19646-arm64.iso...
	I0916 04:18:11.030214    5715 main.go:141] libmachine: Creating SSH key...
	I0916 04:18:11.111397    5715 main.go:141] libmachine: Creating Disk image...
	I0916 04:18:11.111403    5715 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0916 04:18:11.111606    5715 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/flannel-725000/disk.qcow2.raw /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/flannel-725000/disk.qcow2
	I0916 04:18:11.121075    5715 main.go:141] libmachine: STDOUT: 
	I0916 04:18:11.121109    5715 main.go:141] libmachine: STDERR: 
	I0916 04:18:11.121173    5715 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/flannel-725000/disk.qcow2 +20000M
	I0916 04:18:11.129145    5715 main.go:141] libmachine: STDOUT: Image resized.
	
	I0916 04:18:11.129162    5715 main.go:141] libmachine: STDERR: 
	I0916 04:18:11.129175    5715 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/flannel-725000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/flannel-725000/disk.qcow2
	I0916 04:18:11.129180    5715 main.go:141] libmachine: Starting QEMU VM...
	I0916 04:18:11.129195    5715 qemu.go:418] Using hvf for hardware acceleration
	I0916 04:18:11.129232    5715 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/flannel-725000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/flannel-725000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/flannel-725000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:18:17:d7:2b:7e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/flannel-725000/disk.qcow2
	I0916 04:18:11.130907    5715 main.go:141] libmachine: STDOUT: 
	I0916 04:18:11.130925    5715 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 04:18:11.130948    5715 client.go:171] duration metric: took 404.819208ms to LocalClient.Create
	I0916 04:18:13.133002    5715 start.go:128] duration metric: took 2.429850041s to createHost
	I0916 04:18:13.133041    5715 start.go:83] releasing machines lock for "flannel-725000", held for 2.4299355s
	W0916 04:18:13.133061    5715 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 04:18:13.138131    5715 out.go:177] * Deleting "flannel-725000" in qemu2 ...
	W0916 04:18:13.161714    5715 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 04:18:13.161724    5715 start.go:729] Will try again in 5 seconds ...
	I0916 04:18:18.163923    5715 start.go:360] acquireMachinesLock for flannel-725000: {Name:mk751074fc8f4092f9eb4cf42ed6e63cca97a7fa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 04:18:18.164534    5715 start.go:364] duration metric: took 483.709µs to acquireMachinesLock for "flannel-725000"
	I0916 04:18:18.164662    5715 start.go:93] Provisioning new machine with config: &{Name:flannel-725000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.1 ClusterName:flannel-725000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 04:18:18.164982    5715 start.go:125] createHost starting for "" (driver="qemu2")
	I0916 04:18:18.175566    5715 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0916 04:18:18.225856    5715 start.go:159] libmachine.API.Create for "flannel-725000" (driver="qemu2")
	I0916 04:18:18.225930    5715 client.go:168] LocalClient.Create starting
	I0916 04:18:18.226096    5715 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/ca.pem
	I0916 04:18:18.226174    5715 main.go:141] libmachine: Decoding PEM data...
	I0916 04:18:18.226248    5715 main.go:141] libmachine: Parsing certificate...
	I0916 04:18:18.226314    5715 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/cert.pem
	I0916 04:18:18.226366    5715 main.go:141] libmachine: Decoding PEM data...
	I0916 04:18:18.226380    5715 main.go:141] libmachine: Parsing certificate...
	I0916 04:18:18.227025    5715 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19651-1133/.minikube/cache/iso/arm64/minikube-v1.34.0-1726415472-19646-arm64.iso...
	I0916 04:18:18.397453    5715 main.go:141] libmachine: Creating SSH key...
	I0916 04:18:18.442319    5715 main.go:141] libmachine: Creating Disk image...
	I0916 04:18:18.442325    5715 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0916 04:18:18.442528    5715 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/flannel-725000/disk.qcow2.raw /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/flannel-725000/disk.qcow2
	I0916 04:18:18.451888    5715 main.go:141] libmachine: STDOUT: 
	I0916 04:18:18.451905    5715 main.go:141] libmachine: STDERR: 
	I0916 04:18:18.451959    5715 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/flannel-725000/disk.qcow2 +20000M
	I0916 04:18:18.459977    5715 main.go:141] libmachine: STDOUT: Image resized.
	
	I0916 04:18:18.459993    5715 main.go:141] libmachine: STDERR: 
	I0916 04:18:18.460010    5715 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/flannel-725000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/flannel-725000/disk.qcow2
	I0916 04:18:18.460016    5715 main.go:141] libmachine: Starting QEMU VM...
	I0916 04:18:18.460033    5715 qemu.go:418] Using hvf for hardware acceleration
	I0916 04:18:18.460058    5715 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/flannel-725000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/flannel-725000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/flannel-725000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:87:5d:fe:e4:10 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/flannel-725000/disk.qcow2
	I0916 04:18:18.461744    5715 main.go:141] libmachine: STDOUT: 
	I0916 04:18:18.461770    5715 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 04:18:18.461784    5715 client.go:171] duration metric: took 235.841459ms to LocalClient.Create
	I0916 04:18:20.462180    5715 start.go:128] duration metric: took 2.297226708s to createHost
	I0916 04:18:20.462214    5715 start.go:83] releasing machines lock for "flannel-725000", held for 2.297707708s
	W0916 04:18:20.462344    5715 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p flannel-725000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-725000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 04:18:20.470489    5715 out.go:201] 
	W0916 04:18:20.475514    5715 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0916 04:18:20.475521    5715 out.go:270] * 
	W0916 04:18:20.476034    5715 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 04:18:20.485518    5715 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.91s)
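
Every failure in this group shares one root cause: the qemu2 driver launches the VM through /opt/socket_vmnet/bin/socket_vmnet_client, and that client cannot reach the daemon's unix socket at /var/run/socket_vmnet. A minimal standalone Go sketch (illustrative only, not minikube's own code; the socket path is taken from the logs above) that reproduces the same check:

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // socket path from the logs above
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// A daemon that is not running yields the same "connection refused"
			// reported in every failure in this group.
			fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections at", sock)
	}

If this probe fails with "connection refused", the socket_vmnet daemon is simply not running on the agent; with a Homebrew install it is typically started with: sudo brew services start socket_vmnet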

TestNetworkPlugins/group/bridge/Start (9.73s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-725000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
E0916 04:18:27.609769    1652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/addons-490000/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-725000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.73074425s)

-- stdout --
	* [bridge-725000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19651
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-725000" primary control-plane node in "bridge-725000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-725000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0916 04:18:22.861316    5832 out.go:345] Setting OutFile to fd 1 ...
	I0916 04:18:22.861469    5832 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:18:22.861473    5832 out.go:358] Setting ErrFile to fd 2...
	I0916 04:18:22.861475    5832 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:18:22.861594    5832 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1133/.minikube/bin
	I0916 04:18:22.862639    5832 out.go:352] Setting JSON to false
	I0916 04:18:22.879061    5832 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4665,"bootTime":1726480837,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0916 04:18:22.879174    5832 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 04:18:22.884765    5832 out.go:177] * [bridge-725000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0916 04:18:22.890710    5832 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 04:18:22.890759    5832 notify.go:220] Checking for updates...
	I0916 04:18:22.897627    5832 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	I0916 04:18:22.900700    5832 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0916 04:18:22.903685    5832 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 04:18:22.906615    5832 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	I0916 04:18:22.909706    5832 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 04:18:22.913023    5832 config.go:182] Loaded profile config "multinode-990000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 04:18:22.913092    5832 config.go:182] Loaded profile config "stopped-upgrade-716000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0916 04:18:22.913143    5832 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 04:18:22.916702    5832 out.go:177] * Using the qemu2 driver based on user configuration
	I0916 04:18:22.923731    5832 start.go:297] selected driver: qemu2
	I0916 04:18:22.923739    5832 start.go:901] validating driver "qemu2" against <nil>
	I0916 04:18:22.923747    5832 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 04:18:22.926017    5832 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 04:18:22.927682    5832 out.go:177] * Automatically selected the socket_vmnet network
	I0916 04:18:22.930716    5832 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 04:18:22.930741    5832 cni.go:84] Creating CNI manager for "bridge"
	I0916 04:18:22.930753    5832 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0916 04:18:22.930803    5832 start.go:340] cluster config:
	{Name:bridge-725000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-725000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_
vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 04:18:22.934487    5832 iso.go:125] acquiring lock: {Name:mk55e9fb1297ea51932361bbd0234f8c1091a697 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 04:18:22.941611    5832 out.go:177] * Starting "bridge-725000" primary control-plane node in "bridge-725000" cluster
	I0916 04:18:22.945678    5832 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 04:18:22.945692    5832 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0916 04:18:22.945701    5832 cache.go:56] Caching tarball of preloaded images
	I0916 04:18:22.945770    5832 preload.go:172] Found /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0916 04:18:22.945776    5832 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0916 04:18:22.945834    5832 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/bridge-725000/config.json ...
	I0916 04:18:22.945845    5832 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/bridge-725000/config.json: {Name:mk1be1ce93e69b0b6cb553f4d9f5937a137144df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 04:18:22.946075    5832 start.go:360] acquireMachinesLock for bridge-725000: {Name:mk751074fc8f4092f9eb4cf42ed6e63cca97a7fa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 04:18:22.946121    5832 start.go:364] duration metric: took 34.708µs to acquireMachinesLock for "bridge-725000"
	I0916 04:18:22.946144    5832 start.go:93] Provisioning new machine with config: &{Name:bridge-725000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.1 ClusterName:bridge-725000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 04:18:22.946193    5832 start.go:125] createHost starting for "" (driver="qemu2")
	I0916 04:18:22.953646    5832 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0916 04:18:22.969466    5832 start.go:159] libmachine.API.Create for "bridge-725000" (driver="qemu2")
	I0916 04:18:22.969524    5832 client.go:168] LocalClient.Create starting
	I0916 04:18:22.969590    5832 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/ca.pem
	I0916 04:18:22.969619    5832 main.go:141] libmachine: Decoding PEM data...
	I0916 04:18:22.969632    5832 main.go:141] libmachine: Parsing certificate...
	I0916 04:18:22.969671    5832 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/cert.pem
	I0916 04:18:22.969697    5832 main.go:141] libmachine: Decoding PEM data...
	I0916 04:18:22.969712    5832 main.go:141] libmachine: Parsing certificate...
	I0916 04:18:22.970062    5832 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19651-1133/.minikube/cache/iso/arm64/minikube-v1.34.0-1726415472-19646-arm64.iso...
	I0916 04:18:23.131243    5832 main.go:141] libmachine: Creating SSH key...
	I0916 04:18:23.205020    5832 main.go:141] libmachine: Creating Disk image...
	I0916 04:18:23.205026    5832 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0916 04:18:23.205213    5832 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/bridge-725000/disk.qcow2.raw /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/bridge-725000/disk.qcow2
	I0916 04:18:23.214307    5832 main.go:141] libmachine: STDOUT: 
	I0916 04:18:23.214332    5832 main.go:141] libmachine: STDERR: 
	I0916 04:18:23.214391    5832 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/bridge-725000/disk.qcow2 +20000M
	I0916 04:18:23.222448    5832 main.go:141] libmachine: STDOUT: Image resized.
	
	I0916 04:18:23.222462    5832 main.go:141] libmachine: STDERR: 
	I0916 04:18:23.222476    5832 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/bridge-725000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/bridge-725000/disk.qcow2
	I0916 04:18:23.222482    5832 main.go:141] libmachine: Starting QEMU VM...
	I0916 04:18:23.222494    5832 qemu.go:418] Using hvf for hardware acceleration
	I0916 04:18:23.222524    5832 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/bridge-725000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/bridge-725000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/bridge-725000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:45:c8:19:eb:5d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/bridge-725000/disk.qcow2
	I0916 04:18:23.224134    5832 main.go:141] libmachine: STDOUT: 
	I0916 04:18:23.224148    5832 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 04:18:23.224170    5832 client.go:171] duration metric: took 254.643125ms to LocalClient.Create
	I0916 04:18:25.226221    5832 start.go:128] duration metric: took 2.280066458s to createHost
	I0916 04:18:25.226238    5832 start.go:83] releasing machines lock for "bridge-725000", held for 2.2801475s
	W0916 04:18:25.226265    5832 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 04:18:25.233795    5832 out.go:177] * Deleting "bridge-725000" in qemu2 ...
	W0916 04:18:25.245222    5832 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 04:18:25.245229    5832 start.go:729] Will try again in 5 seconds ...
	I0916 04:18:30.247311    5832 start.go:360] acquireMachinesLock for bridge-725000: {Name:mk751074fc8f4092f9eb4cf42ed6e63cca97a7fa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 04:18:30.247926    5832 start.go:364] duration metric: took 518.417µs to acquireMachinesLock for "bridge-725000"
	I0916 04:18:30.248077    5832 start.go:93] Provisioning new machine with config: &{Name:bridge-725000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.1 ClusterName:bridge-725000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 04:18:30.248408    5832 start.go:125] createHost starting for "" (driver="qemu2")
	I0916 04:18:30.253974    5832 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0916 04:18:30.300797    5832 start.go:159] libmachine.API.Create for "bridge-725000" (driver="qemu2")
	I0916 04:18:30.300842    5832 client.go:168] LocalClient.Create starting
	I0916 04:18:30.300941    5832 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/ca.pem
	I0916 04:18:30.301001    5832 main.go:141] libmachine: Decoding PEM data...
	I0916 04:18:30.301015    5832 main.go:141] libmachine: Parsing certificate...
	I0916 04:18:30.301066    5832 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/cert.pem
	I0916 04:18:30.301104    5832 main.go:141] libmachine: Decoding PEM data...
	I0916 04:18:30.301116    5832 main.go:141] libmachine: Parsing certificate...
	I0916 04:18:30.301611    5832 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19651-1133/.minikube/cache/iso/arm64/minikube-v1.34.0-1726415472-19646-arm64.iso...
	I0916 04:18:30.473181    5832 main.go:141] libmachine: Creating SSH key...
	I0916 04:18:30.511146    5832 main.go:141] libmachine: Creating Disk image...
	I0916 04:18:30.511152    5832 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0916 04:18:30.511335    5832 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/bridge-725000/disk.qcow2.raw /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/bridge-725000/disk.qcow2
	I0916 04:18:30.520591    5832 main.go:141] libmachine: STDOUT: 
	I0916 04:18:30.520612    5832 main.go:141] libmachine: STDERR: 
	I0916 04:18:30.520664    5832 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/bridge-725000/disk.qcow2 +20000M
	I0916 04:18:30.528711    5832 main.go:141] libmachine: STDOUT: Image resized.
	
	I0916 04:18:30.528730    5832 main.go:141] libmachine: STDERR: 
	I0916 04:18:30.528749    5832 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/bridge-725000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/bridge-725000/disk.qcow2
	I0916 04:18:30.528755    5832 main.go:141] libmachine: Starting QEMU VM...
	I0916 04:18:30.528765    5832 qemu.go:418] Using hvf for hardware acceleration
	I0916 04:18:30.528798    5832 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/bridge-725000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/bridge-725000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/bridge-725000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:35:8f:f3:9d:02 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/bridge-725000/disk.qcow2
	I0916 04:18:30.530558    5832 main.go:141] libmachine: STDOUT: 
	I0916 04:18:30.530574    5832 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 04:18:30.530587    5832 client.go:171] duration metric: took 229.744417ms to LocalClient.Create
	I0916 04:18:32.532672    5832 start.go:128] duration metric: took 2.284278458s to createHost
	I0916 04:18:32.532697    5832 start.go:83] releasing machines lock for "bridge-725000", held for 2.284796083s
	W0916 04:18:32.532813    5832 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p bridge-725000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 04:18:32.539994    5832 out.go:201] 
	W0916 04:18:32.544034    5832 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0916 04:18:32.544047    5832 out.go:270] * 
	W0916 04:18:32.544534    5832 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 04:18:32.550953    5832 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.73s)
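
Note that the disk-image preparation always succeeds before the launch fails: each run completes the qemu-img convert and qemu-img resize steps and only then hits the socket error, so the QEMU/qemu-img installation itself is healthy. A Go sketch of that preparation step (paths are hypothetical stand-ins; the commands mirror the "executing: qemu-img ..." lines above):

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		// machineDir is a hypothetical stand-in for the per-profile
		// .minikube/machines directory seen in the logs.
		machineDir := "/tmp/example-machine"
		raw := machineDir + "/disk.qcow2.raw"
		qcow2 := machineDir + "/disk.qcow2"

		// Same shape as the logged "qemu-img convert -f raw -O qcow2 <raw> <qcow2>".
		if out, err := exec.Command("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow2).CombinedOutput(); err != nil {
			log.Fatalf("convert failed: %v\n%s", err, out)
		}
		// Same shape as the logged "qemu-img resize <qcow2> +20000M".
		if out, err := exec.Command("qemu-img", "resize", qcow2, "+20000M").CombinedOutput(); err != nil {
			log.Fatalf("resize failed: %v\n%s", err, out)
		}
		log.Println("disk image ready:", qcow2)
	}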

TestNetworkPlugins/group/kubenet/Start (9.89s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-725000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-725000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.892600792s)

-- stdout --
	* [kubenet-725000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19651
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-725000" primary control-plane node in "kubenet-725000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-725000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0916 04:18:34.745703    5941 out.go:345] Setting OutFile to fd 1 ...
	I0916 04:18:34.745825    5941 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:18:34.745828    5941 out.go:358] Setting ErrFile to fd 2...
	I0916 04:18:34.745831    5941 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:18:34.745959    5941 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1133/.minikube/bin
	I0916 04:18:34.747094    5941 out.go:352] Setting JSON to false
	I0916 04:18:34.763357    5941 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4677,"bootTime":1726480837,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0916 04:18:34.763426    5941 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 04:18:34.768860    5941 out.go:177] * [kubenet-725000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0916 04:18:34.776721    5941 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 04:18:34.776781    5941 notify.go:220] Checking for updates...
	I0916 04:18:34.783697    5941 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	I0916 04:18:34.786651    5941 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0916 04:18:34.789686    5941 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 04:18:34.792615    5941 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	I0916 04:18:34.795662    5941 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 04:18:34.799037    5941 config.go:182] Loaded profile config "multinode-990000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 04:18:34.799106    5941 config.go:182] Loaded profile config "stopped-upgrade-716000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0916 04:18:34.799154    5941 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 04:18:34.803570    5941 out.go:177] * Using the qemu2 driver based on user configuration
	I0916 04:18:34.815645    5941 start.go:297] selected driver: qemu2
	I0916 04:18:34.815650    5941 start.go:901] validating driver "qemu2" against <nil>
	I0916 04:18:34.815656    5941 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 04:18:34.817809    5941 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 04:18:34.820662    5941 out.go:177] * Automatically selected the socket_vmnet network
	I0916 04:18:34.824837    5941 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 04:18:34.824861    5941 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0916 04:18:34.824887    5941 start.go:340] cluster config:
	{Name:kubenet-725000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubenet-725000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_
vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 04:18:34.828390    5941 iso.go:125] acquiring lock: {Name:mk55e9fb1297ea51932361bbd0234f8c1091a697 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 04:18:34.836654    5941 out.go:177] * Starting "kubenet-725000" primary control-plane node in "kubenet-725000" cluster
	I0916 04:18:34.840662    5941 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 04:18:34.840683    5941 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0916 04:18:34.840696    5941 cache.go:56] Caching tarball of preloaded images
	I0916 04:18:34.840789    5941 preload.go:172] Found /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0916 04:18:34.840795    5941 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0916 04:18:34.840855    5941 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/kubenet-725000/config.json ...
	I0916 04:18:34.840866    5941 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/kubenet-725000/config.json: {Name:mk06264fe9b80cd2571c048675dc619d2ad1de6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 04:18:34.841171    5941 start.go:360] acquireMachinesLock for kubenet-725000: {Name:mk751074fc8f4092f9eb4cf42ed6e63cca97a7fa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 04:18:34.841205    5941 start.go:364] duration metric: took 28.416µs to acquireMachinesLock for "kubenet-725000"
	I0916 04:18:34.841219    5941 start.go:93] Provisioning new machine with config: &{Name:kubenet-725000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.1 ClusterName:kubenet-725000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 04:18:34.841242    5941 start.go:125] createHost starting for "" (driver="qemu2")
	I0916 04:18:34.849669    5941 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0916 04:18:34.865109    5941 start.go:159] libmachine.API.Create for "kubenet-725000" (driver="qemu2")
	I0916 04:18:34.865141    5941 client.go:168] LocalClient.Create starting
	I0916 04:18:34.865212    5941 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/ca.pem
	I0916 04:18:34.865253    5941 main.go:141] libmachine: Decoding PEM data...
	I0916 04:18:34.865262    5941 main.go:141] libmachine: Parsing certificate...
	I0916 04:18:34.865299    5941 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/cert.pem
	I0916 04:18:34.865328    5941 main.go:141] libmachine: Decoding PEM data...
	I0916 04:18:34.865339    5941 main.go:141] libmachine: Parsing certificate...
	I0916 04:18:34.865856    5941 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19651-1133/.minikube/cache/iso/arm64/minikube-v1.34.0-1726415472-19646-arm64.iso...
	I0916 04:18:35.027418    5941 main.go:141] libmachine: Creating SSH key...
	I0916 04:18:35.148376    5941 main.go:141] libmachine: Creating Disk image...
	I0916 04:18:35.148382    5941 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0916 04:18:35.148579    5941 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/kubenet-725000/disk.qcow2.raw /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/kubenet-725000/disk.qcow2
	I0916 04:18:35.157871    5941 main.go:141] libmachine: STDOUT: 
	I0916 04:18:35.157896    5941 main.go:141] libmachine: STDERR: 
	I0916 04:18:35.157952    5941 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/kubenet-725000/disk.qcow2 +20000M
	I0916 04:18:35.165729    5941 main.go:141] libmachine: STDOUT: Image resized.
	
	I0916 04:18:35.165745    5941 main.go:141] libmachine: STDERR: 
	I0916 04:18:35.165769    5941 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/kubenet-725000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/kubenet-725000/disk.qcow2
	I0916 04:18:35.165773    5941 main.go:141] libmachine: Starting QEMU VM...
	I0916 04:18:35.165784    5941 qemu.go:418] Using hvf for hardware acceleration
	I0916 04:18:35.165811    5941 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/kubenet-725000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/kubenet-725000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/kubenet-725000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:7e:ce:dc:39:c0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/kubenet-725000/disk.qcow2
	I0916 04:18:35.167381    5941 main.go:141] libmachine: STDOUT: 
	I0916 04:18:35.167396    5941 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 04:18:35.167417    5941 client.go:171] duration metric: took 302.275709ms to LocalClient.Create
	I0916 04:18:37.169563    5941 start.go:128] duration metric: took 2.328337834s to createHost
	I0916 04:18:37.169629    5941 start.go:83] releasing machines lock for "kubenet-725000", held for 2.328463s
	W0916 04:18:37.169689    5941 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 04:18:37.186123    5941 out.go:177] * Deleting "kubenet-725000" in qemu2 ...
	W0916 04:18:37.216904    5941 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 04:18:37.216934    5941 start.go:729] Will try again in 5 seconds ...
	I0916 04:18:42.219099    5941 start.go:360] acquireMachinesLock for kubenet-725000: {Name:mk751074fc8f4092f9eb4cf42ed6e63cca97a7fa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 04:18:42.219746    5941 start.go:364] duration metric: took 517.625µs to acquireMachinesLock for "kubenet-725000"
	I0916 04:18:42.219827    5941 start.go:93] Provisioning new machine with config: &{Name:kubenet-725000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.1 ClusterName:kubenet-725000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 04:18:42.220173    5941 start.go:125] createHost starting for "" (driver="qemu2")
	I0916 04:18:42.225941    5941 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0916 04:18:42.270158    5941 start.go:159] libmachine.API.Create for "kubenet-725000" (driver="qemu2")
	I0916 04:18:42.270214    5941 client.go:168] LocalClient.Create starting
	I0916 04:18:42.270355    5941 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/ca.pem
	I0916 04:18:42.270437    5941 main.go:141] libmachine: Decoding PEM data...
	I0916 04:18:42.270456    5941 main.go:141] libmachine: Parsing certificate...
	I0916 04:18:42.270518    5941 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/cert.pem
	I0916 04:18:42.270566    5941 main.go:141] libmachine: Decoding PEM data...
	I0916 04:18:42.270581    5941 main.go:141] libmachine: Parsing certificate...
	I0916 04:18:42.271404    5941 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19651-1133/.minikube/cache/iso/arm64/minikube-v1.34.0-1726415472-19646-arm64.iso...
	I0916 04:18:42.459975    5941 main.go:141] libmachine: Creating SSH key...
	I0916 04:18:42.558784    5941 main.go:141] libmachine: Creating Disk image...
	I0916 04:18:42.558791    5941 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0916 04:18:42.558993    5941 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/kubenet-725000/disk.qcow2.raw /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/kubenet-725000/disk.qcow2
	I0916 04:18:42.568394    5941 main.go:141] libmachine: STDOUT: 
	I0916 04:18:42.568413    5941 main.go:141] libmachine: STDERR: 
	I0916 04:18:42.568468    5941 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/kubenet-725000/disk.qcow2 +20000M
	I0916 04:18:42.576457    5941 main.go:141] libmachine: STDOUT: Image resized.
	
	I0916 04:18:42.576478    5941 main.go:141] libmachine: STDERR: 
	I0916 04:18:42.576497    5941 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/kubenet-725000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/kubenet-725000/disk.qcow2
	I0916 04:18:42.576503    5941 main.go:141] libmachine: Starting QEMU VM...
	I0916 04:18:42.576510    5941 qemu.go:418] Using hvf for hardware acceleration
	I0916 04:18:42.576553    5941 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/kubenet-725000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/kubenet-725000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/kubenet-725000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:a6:c1:13:19:ab -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/kubenet-725000/disk.qcow2
	I0916 04:18:42.578145    5941 main.go:141] libmachine: STDOUT: 
	I0916 04:18:42.578167    5941 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 04:18:42.578181    5941 client.go:171] duration metric: took 307.96775ms to LocalClient.Create
	I0916 04:18:44.580253    5941 start.go:128] duration metric: took 2.36010925s to createHost
	I0916 04:18:44.580289    5941 start.go:83] releasing machines lock for "kubenet-725000", held for 2.360567166s
	W0916 04:18:44.580480    5941 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-725000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 04:18:44.587872    5941 out.go:201] 
	W0916 04:18:44.590874    5941 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0916 04:18:44.590887    5941 out.go:270] * 
	W0916 04:18:44.591597    5941 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 04:18:44.602838    5941 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.89s)
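
The retry shape is identical in every run above: the first create fails, the half-created profile is deleted, minikube waits a fixed 5 seconds ("Will try again in 5 seconds ..."), tries once more, and then exits with status 80. A rough Go sketch of that control flow (createHost is a hypothetical stand-in for the real libmachine create path, hard-wired to fail the way these runs do):

	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	// createHost stands in for the real create path; while the socket_vmnet
	// daemon is down it always fails with the error seen in this report.
	func createHost(profile string) error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		const profile = "kubenet-725000"
		if err := createHost(profile); err != nil {
			fmt.Printf("! StartHost failed, but will try again: %v\n", err)
			// The real flow deletes the half-created profile here before waiting.
			time.Sleep(5 * time.Second)
			if err := createHost(profile); err != nil {
				fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
				os.Exit(80) // the "exit status 80" that net_test.go reports
			}
		}
	}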

TestStartStop/group/old-k8s-version/serial/FirstStart (9.87s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-460000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-460000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.837319959s)

-- stdout --
	* [old-k8s-version-460000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19651
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-460000" primary control-plane node in "old-k8s-version-460000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-460000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0916 04:18:46.763176    6053 out.go:345] Setting OutFile to fd 1 ...
	I0916 04:18:46.763303    6053 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:18:46.763306    6053 out.go:358] Setting ErrFile to fd 2...
	I0916 04:18:46.763309    6053 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:18:46.763436    6053 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1133/.minikube/bin
	I0916 04:18:46.764506    6053 out.go:352] Setting JSON to false
	I0916 04:18:46.780653    6053 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4689,"bootTime":1726480837,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0916 04:18:46.780771    6053 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 04:18:46.785480    6053 out.go:177] * [old-k8s-version-460000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0916 04:18:46.793579    6053 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 04:18:46.793665    6053 notify.go:220] Checking for updates...
	I0916 04:18:46.800487    6053 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	I0916 04:18:46.803517    6053 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0916 04:18:46.806487    6053 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 04:18:46.809481    6053 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	I0916 04:18:46.812526    6053 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 04:18:46.815812    6053 config.go:182] Loaded profile config "multinode-990000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 04:18:46.815881    6053 config.go:182] Loaded profile config "stopped-upgrade-716000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0916 04:18:46.815919    6053 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 04:18:46.820506    6053 out.go:177] * Using the qemu2 driver based on user configuration
	I0916 04:18:46.827451    6053 start.go:297] selected driver: qemu2
	I0916 04:18:46.827456    6053 start.go:901] validating driver "qemu2" against <nil>
	I0916 04:18:46.827462    6053 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 04:18:46.829784    6053 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 04:18:46.832499    6053 out.go:177] * Automatically selected the socket_vmnet network
	I0916 04:18:46.835573    6053 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 04:18:46.835591    6053 cni.go:84] Creating CNI manager for ""
	I0916 04:18:46.835613    6053 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0916 04:18:46.835641    6053 start.go:340] cluster config:
	{Name:old-k8s-version-460000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-460000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/
socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 04:18:46.839559    6053 iso.go:125] acquiring lock: {Name:mk55e9fb1297ea51932361bbd0234f8c1091a697 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 04:18:46.846506    6053 out.go:177] * Starting "old-k8s-version-460000" primary control-plane node in "old-k8s-version-460000" cluster
	I0916 04:18:46.850446    6053 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0916 04:18:46.850467    6053 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0916 04:18:46.850475    6053 cache.go:56] Caching tarball of preloaded images
	I0916 04:18:46.850542    6053 preload.go:172] Found /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0916 04:18:46.850547    6053 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0916 04:18:46.850598    6053 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/old-k8s-version-460000/config.json ...
	I0916 04:18:46.850609    6053 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/old-k8s-version-460000/config.json: {Name:mkbdab606bb3ec28040374e0cab423b34f733d10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 04:18:46.850822    6053 start.go:360] acquireMachinesLock for old-k8s-version-460000: {Name:mk751074fc8f4092f9eb4cf42ed6e63cca97a7fa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 04:18:46.850854    6053 start.go:364] duration metric: took 25.334µs to acquireMachinesLock for "old-k8s-version-460000"
	I0916 04:18:46.850865    6053 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-460000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-460000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 04:18:46.850904    6053 start.go:125] createHost starting for "" (driver="qemu2")
	I0916 04:18:46.859345    6053 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0916 04:18:46.874739    6053 start.go:159] libmachine.API.Create for "old-k8s-version-460000" (driver="qemu2")
	I0916 04:18:46.874770    6053 client.go:168] LocalClient.Create starting
	I0916 04:18:46.874845    6053 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/ca.pem
	I0916 04:18:46.874876    6053 main.go:141] libmachine: Decoding PEM data...
	I0916 04:18:46.874883    6053 main.go:141] libmachine: Parsing certificate...
	I0916 04:18:46.874921    6053 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/cert.pem
	I0916 04:18:46.874950    6053 main.go:141] libmachine: Decoding PEM data...
	I0916 04:18:46.874955    6053 main.go:141] libmachine: Parsing certificate...
	I0916 04:18:46.875329    6053 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19651-1133/.minikube/cache/iso/arm64/minikube-v1.34.0-1726415472-19646-arm64.iso...
	I0916 04:18:47.037460    6053 main.go:141] libmachine: Creating SSH key...
	I0916 04:18:47.157408    6053 main.go:141] libmachine: Creating Disk image...
	I0916 04:18:47.157414    6053 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0916 04:18:47.157611    6053 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/old-k8s-version-460000/disk.qcow2.raw /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/old-k8s-version-460000/disk.qcow2
	I0916 04:18:47.167048    6053 main.go:141] libmachine: STDOUT: 
	I0916 04:18:47.167066    6053 main.go:141] libmachine: STDERR: 
	I0916 04:18:47.167129    6053 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/old-k8s-version-460000/disk.qcow2 +20000M
	I0916 04:18:47.174993    6053 main.go:141] libmachine: STDOUT: Image resized.
	
	I0916 04:18:47.175007    6053 main.go:141] libmachine: STDERR: 
	I0916 04:18:47.175020    6053 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/old-k8s-version-460000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/old-k8s-version-460000/disk.qcow2
	I0916 04:18:47.175030    6053 main.go:141] libmachine: Starting QEMU VM...
	I0916 04:18:47.175042    6053 qemu.go:418] Using hvf for hardware acceleration
	I0916 04:18:47.175081    6053 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/old-k8s-version-460000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/old-k8s-version-460000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/old-k8s-version-460000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:83:88:e8:2e:2a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/old-k8s-version-460000/disk.qcow2
	I0916 04:18:47.176709    6053 main.go:141] libmachine: STDOUT: 
	I0916 04:18:47.176723    6053 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 04:18:47.176744    6053 client.go:171] duration metric: took 301.974375ms to LocalClient.Create
	I0916 04:18:49.178997    6053 start.go:128] duration metric: took 2.328106209s to createHost
	I0916 04:18:49.179098    6053 start.go:83] releasing machines lock for "old-k8s-version-460000", held for 2.328280125s
	W0916 04:18:49.179184    6053 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 04:18:49.186304    6053 out.go:177] * Deleting "old-k8s-version-460000" in qemu2 ...
	W0916 04:18:49.215024    6053 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 04:18:49.215051    6053 start.go:729] Will try again in 5 seconds ...
	I0916 04:18:54.217194    6053 start.go:360] acquireMachinesLock for old-k8s-version-460000: {Name:mk751074fc8f4092f9eb4cf42ed6e63cca97a7fa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 04:18:54.217606    6053 start.go:364] duration metric: took 321.958µs to acquireMachinesLock for "old-k8s-version-460000"
	I0916 04:18:54.217714    6053 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-460000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-460000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 04:18:54.217911    6053 start.go:125] createHost starting for "" (driver="qemu2")
	I0916 04:18:54.224501    6053 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0916 04:18:54.270827    6053 start.go:159] libmachine.API.Create for "old-k8s-version-460000" (driver="qemu2")
	I0916 04:18:54.270884    6053 client.go:168] LocalClient.Create starting
	I0916 04:18:54.271024    6053 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/ca.pem
	I0916 04:18:54.271094    6053 main.go:141] libmachine: Decoding PEM data...
	I0916 04:18:54.271112    6053 main.go:141] libmachine: Parsing certificate...
	I0916 04:18:54.271173    6053 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/cert.pem
	I0916 04:18:54.271225    6053 main.go:141] libmachine: Decoding PEM data...
	I0916 04:18:54.271238    6053 main.go:141] libmachine: Parsing certificate...
	I0916 04:18:54.271761    6053 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19651-1133/.minikube/cache/iso/arm64/minikube-v1.34.0-1726415472-19646-arm64.iso...
	I0916 04:18:54.444173    6053 main.go:141] libmachine: Creating SSH key...
	I0916 04:18:54.519452    6053 main.go:141] libmachine: Creating Disk image...
	I0916 04:18:54.519460    6053 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0916 04:18:54.519661    6053 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/old-k8s-version-460000/disk.qcow2.raw /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/old-k8s-version-460000/disk.qcow2
	I0916 04:18:54.529066    6053 main.go:141] libmachine: STDOUT: 
	I0916 04:18:54.529082    6053 main.go:141] libmachine: STDERR: 
	I0916 04:18:54.529146    6053 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/old-k8s-version-460000/disk.qcow2 +20000M
	I0916 04:18:54.537487    6053 main.go:141] libmachine: STDOUT: Image resized.
	
	I0916 04:18:54.537509    6053 main.go:141] libmachine: STDERR: 
	I0916 04:18:54.537524    6053 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/old-k8s-version-460000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/old-k8s-version-460000/disk.qcow2
	I0916 04:18:54.537527    6053 main.go:141] libmachine: Starting QEMU VM...
	I0916 04:18:54.537536    6053 qemu.go:418] Using hvf for hardware acceleration
	I0916 04:18:54.537574    6053 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/old-k8s-version-460000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/old-k8s-version-460000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/old-k8s-version-460000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:91:8b:f4:ee:15 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/old-k8s-version-460000/disk.qcow2
	I0916 04:18:54.539314    6053 main.go:141] libmachine: STDOUT: 
	I0916 04:18:54.539328    6053 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 04:18:54.539341    6053 client.go:171] duration metric: took 268.455ms to LocalClient.Create
	I0916 04:18:56.541369    6053 start.go:128] duration metric: took 2.323494917s to createHost
	I0916 04:18:56.541393    6053 start.go:83] releasing machines lock for "old-k8s-version-460000", held for 2.323817292s
	W0916 04:18:56.541469    6053 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-460000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-460000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 04:18:56.549308    6053 out.go:201] 
	W0916 04:18:56.553332    6053 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0916 04:18:56.553347    6053 out.go:270] * 
	* 
	W0916 04:18:56.553839    6053 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 04:18:56.562238    6053 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-460000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-460000 -n old-k8s-version-460000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-460000 -n old-k8s-version-460000: exit status 7 (34.765583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-460000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.87s)
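
Note: every failure in this serial group traces to the single root cause visible above: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so QEMU never receives its vmnet file descriptor and host creation aborts with GUEST_PROVISION. A minimal host-side triage sketch, assuming socket_vmnet is the Homebrew-managed install used on these agents (the brew service name below is an assumption, not taken from this log):

    # Does the socket exist, and is the daemon accepting connections?
    ls -l /var/run/socket_vmnet
    # socket_vmnet_client connects to the socket and execs its argument with the
    # vmnet fd on fd 3, so a bare `true` exercises just the connection step.
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
    # Restart the daemon (assumed Homebrew service; adjust for a manual launchd plist)
    sudo brew services restart socket_vmnet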

TestStartStop/group/old-k8s-version/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-460000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-460000 create -f testdata/busybox.yaml: exit status 1 (29.242333ms)

** stderr ** 
	error: context "old-k8s-version-460000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-460000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-460000 -n old-k8s-version-460000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-460000 -n old-k8s-version-460000: exit status 7 (37.529875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-460000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-460000 -n old-k8s-version-460000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-460000 -n old-k8s-version-460000: exit status 7 (33.664833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-460000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.10s)
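
Note: this failure is purely downstream of FirstStart. Because the VM never came up, "minikube start" never wrote an old-k8s-version-460000 entry into the kubeconfig, so every kubectl --context invocation exits 1 before reaching any cluster. A quick way to confirm the missing context (the grep pattern is illustrative):

    kubectl config get-contexts -o name | grep old-k8s-version-460000 || echo "context missing"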

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.13s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-460000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-460000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-460000 describe deploy/metrics-server -n kube-system: exit status 1 (28.598ms)

** stderr ** 
	error: context "old-k8s-version-460000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-460000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-460000 -n old-k8s-version-460000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-460000 -n old-k8s-version-460000: exit status 7 (31.761708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-460000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.13s)
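
Note: the "addons enable metrics-server" step itself appears to exit 0 here, plausibly because with the host stopped it only needs to mutate the stored profile config; the SecondStart config dump below bears this out (Addons map carries metrics-server:true and CustomAddonRegistries map[MetricsServer:fake.domain]). Only the follow-up kubectl describe fails, again for want of a context. The enable invocation, for reference:

    out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-460000 \
      --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain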

TestStartStop/group/old-k8s-version/serial/SecondStart (5.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-460000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-460000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.179687541s)

-- stdout --
	* [old-k8s-version-460000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19651
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-460000" primary control-plane node in "old-k8s-version-460000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-460000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-460000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0916 04:19:00.244818    6106 out.go:345] Setting OutFile to fd 1 ...
	I0916 04:19:00.244959    6106 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:19:00.244962    6106 out.go:358] Setting ErrFile to fd 2...
	I0916 04:19:00.244965    6106 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:19:00.245102    6106 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1133/.minikube/bin
	I0916 04:19:00.246114    6106 out.go:352] Setting JSON to false
	I0916 04:19:00.262911    6106 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4703,"bootTime":1726480837,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0916 04:19:00.262987    6106 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 04:19:00.267535    6106 out.go:177] * [old-k8s-version-460000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0916 04:19:00.273480    6106 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 04:19:00.273502    6106 notify.go:220] Checking for updates...
	I0916 04:19:00.281379    6106 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	I0916 04:19:00.284537    6106 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0916 04:19:00.287507    6106 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 04:19:00.289026    6106 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	I0916 04:19:00.292431    6106 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 04:19:00.295779    6106 config.go:182] Loaded profile config "old-k8s-version-460000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0916 04:19:00.299488    6106 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0916 04:19:00.302505    6106 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 04:19:00.306475    6106 out.go:177] * Using the qemu2 driver based on existing profile
	I0916 04:19:00.313461    6106 start.go:297] selected driver: qemu2
	I0916 04:19:00.313467    6106 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-460000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-460000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:
0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 04:19:00.313515    6106 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 04:19:00.315764    6106 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 04:19:00.315790    6106 cni.go:84] Creating CNI manager for ""
	I0916 04:19:00.315811    6106 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0916 04:19:00.315838    6106 start.go:340] cluster config:
	{Name:old-k8s-version-460000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-460000 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount
9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 04:19:00.319180    6106 iso.go:125] acquiring lock: {Name:mk55e9fb1297ea51932361bbd0234f8c1091a697 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 04:19:00.325392    6106 out.go:177] * Starting "old-k8s-version-460000" primary control-plane node in "old-k8s-version-460000" cluster
	I0916 04:19:00.329526    6106 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0916 04:19:00.329539    6106 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0916 04:19:00.329548    6106 cache.go:56] Caching tarball of preloaded images
	I0916 04:19:00.329600    6106 preload.go:172] Found /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0916 04:19:00.329607    6106 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0916 04:19:00.329653    6106 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/old-k8s-version-460000/config.json ...
	I0916 04:19:00.330066    6106 start.go:360] acquireMachinesLock for old-k8s-version-460000: {Name:mk751074fc8f4092f9eb4cf42ed6e63cca97a7fa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 04:19:00.330093    6106 start.go:364] duration metric: took 20.584µs to acquireMachinesLock for "old-k8s-version-460000"
	I0916 04:19:00.330101    6106 start.go:96] Skipping create...Using existing machine configuration
	I0916 04:19:00.330107    6106 fix.go:54] fixHost starting: 
	I0916 04:19:00.330208    6106 fix.go:112] recreateIfNeeded on old-k8s-version-460000: state=Stopped err=<nil>
	W0916 04:19:00.330217    6106 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 04:19:00.334487    6106 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-460000" ...
	I0916 04:19:00.342534    6106 qemu.go:418] Using hvf for hardware acceleration
	I0916 04:19:00.342576    6106 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/old-k8s-version-460000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/old-k8s-version-460000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/old-k8s-version-460000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:91:8b:f4:ee:15 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/old-k8s-version-460000/disk.qcow2
	I0916 04:19:00.344588    6106 main.go:141] libmachine: STDOUT: 
	I0916 04:19:00.344609    6106 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 04:19:00.344643    6106 fix.go:56] duration metric: took 14.535583ms for fixHost
	I0916 04:19:00.344648    6106 start.go:83] releasing machines lock for "old-k8s-version-460000", held for 14.551375ms
	W0916 04:19:00.344655    6106 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0916 04:19:00.344701    6106 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 04:19:00.344705    6106 start.go:729] Will try again in 5 seconds ...
	I0916 04:19:05.346859    6106 start.go:360] acquireMachinesLock for old-k8s-version-460000: {Name:mk751074fc8f4092f9eb4cf42ed6e63cca97a7fa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 04:19:05.347148    6106 start.go:364] duration metric: took 224.459µs to acquireMachinesLock for "old-k8s-version-460000"
	I0916 04:19:05.347230    6106 start.go:96] Skipping create...Using existing machine configuration
	I0916 04:19:05.347241    6106 fix.go:54] fixHost starting: 
	I0916 04:19:05.347631    6106 fix.go:112] recreateIfNeeded on old-k8s-version-460000: state=Stopped err=<nil>
	W0916 04:19:05.347647    6106 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 04:19:05.355938    6106 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-460000" ...
	I0916 04:19:05.359002    6106 qemu.go:418] Using hvf for hardware acceleration
	I0916 04:19:05.359164    6106 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/old-k8s-version-460000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/old-k8s-version-460000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/old-k8s-version-460000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:91:8b:f4:ee:15 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/old-k8s-version-460000/disk.qcow2
	I0916 04:19:05.365114    6106 main.go:141] libmachine: STDOUT: 
	I0916 04:19:05.365166    6106 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 04:19:05.365224    6106 fix.go:56] duration metric: took 17.981375ms for fixHost
	I0916 04:19:05.365239    6106 start.go:83] releasing machines lock for "old-k8s-version-460000", held for 18.075792ms
	W0916 04:19:05.365394    6106 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-460000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-460000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 04:19:05.373972    6106 out.go:201] 
	W0916 04:19:05.378018    6106 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0916 04:19:05.378046    6106 out.go:270] * 
	* 
	W0916 04:19:05.379642    6106 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 04:19:05.387926    6106 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-460000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-460000 -n old-k8s-version-460000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-460000 -n old-k8s-version-460000: exit status 7 (46.002333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-460000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.23s)
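
Note: the second start takes the fixHost path (restarting the existing stopped machine) rather than createHost, but both funnel into the same socket_vmnet_client invocation and hit the identical "Connection refused"; the shorter ~5.2s runtime simply reflects skipping disk-image creation and the delete-and-recreate of the first start. The post-mortem status check used throughout these tests, with its expected result here:

    out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-460000
    # prints "Stopped" and exits 7, which helpers_test.go treats as "may be ok"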

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-460000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-460000 -n old-k8s-version-460000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-460000 -n old-k8s-version-460000: exit status 7 (30.358125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-460000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-460000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-460000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-460000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.903167ms)

** stderr ** 
	error: context "old-k8s-version-460000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-460000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-460000 -n old-k8s-version-460000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-460000 -n old-k8s-version-460000: exit status 7 (30.033916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-460000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-460000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-460000 -n old-k8s-version-460000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-460000 -n old-k8s-version-460000: exit status 7 (29.733542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-460000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
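
Note: the (-want +got) diff above is go-cmp notation: each "-" line is an image the test expected "image list" to report for v1.20.0, and the absence of any "+" lines means the got side was empty, as you would expect for a VM that never booted. The expected names use the legacy k8s.gcr.io registry, consistent with Kubernetes v1.20. The manual equivalent of the check:

    out/minikube-darwin-arm64 -p old-k8s-version-460000 image list --format=json
    # lists nothing here, since no images were ever loaded into the machine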

TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-460000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-460000 --alsologtostderr -v=1: exit status 83 (43.978375ms)

-- stdout --
	* The control-plane node old-k8s-version-460000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-460000"

-- /stdout --
** stderr ** 
	I0916 04:19:05.631574    6127 out.go:345] Setting OutFile to fd 1 ...
	I0916 04:19:05.632460    6127 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:19:05.632464    6127 out.go:358] Setting ErrFile to fd 2...
	I0916 04:19:05.632467    6127 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:19:05.632616    6127 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1133/.minikube/bin
	I0916 04:19:05.632834    6127 out.go:352] Setting JSON to false
	I0916 04:19:05.632840    6127 mustload.go:65] Loading cluster: old-k8s-version-460000
	I0916 04:19:05.633052    6127 config.go:182] Loaded profile config "old-k8s-version-460000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0916 04:19:05.637832    6127 out.go:177] * The control-plane node old-k8s-version-460000 host is not running: state=Stopped
	I0916 04:19:05.640836    6127 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-460000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-460000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-460000 -n old-k8s-version-460000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-460000 -n old-k8s-version-460000: exit status 7 (29.422667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-460000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-460000 -n old-k8s-version-460000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-460000 -n old-k8s-version-460000: exit status 7 (29.421375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-460000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)
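The pause failure is the same story one layer up: `pause` refuses to act on a stopped host, prints the "To start a cluster" advice, and exits 83, which the stdout above presents as a soft refusal rather than a crash. A reproduction sketch under that assumption, against the same still-stopped profile:

	# Pausing a profile whose VM never booted should print the
	# "host is not running: state=Stopped" advice and exit non-zero (83 here).
	out/minikube-darwin-arm64 pause -p old-k8s-version-460000 --alsologtostderr -v=1
	echo $?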

TestStartStop/group/no-preload/serial/FirstStart (9.94s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-654000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-654000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.867783792s)

-- stdout --
	* [no-preload-654000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19651
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-654000" primary control-plane node in "no-preload-654000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-654000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0916 04:19:05.954238    6144 out.go:345] Setting OutFile to fd 1 ...
	I0916 04:19:05.954358    6144 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:19:05.954362    6144 out.go:358] Setting ErrFile to fd 2...
	I0916 04:19:05.954364    6144 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:19:05.954481    6144 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1133/.minikube/bin
	I0916 04:19:05.955561    6144 out.go:352] Setting JSON to false
	I0916 04:19:05.971932    6144 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4708,"bootTime":1726480837,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0916 04:19:05.972005    6144 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 04:19:05.975698    6144 out.go:177] * [no-preload-654000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0916 04:19:05.983594    6144 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 04:19:05.983656    6144 notify.go:220] Checking for updates...
	I0916 04:19:05.990647    6144 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	I0916 04:19:05.993604    6144 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0916 04:19:05.996637    6144 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 04:19:05.999667    6144 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	I0916 04:19:06.002591    6144 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 04:19:06.005901    6144 config.go:182] Loaded profile config "multinode-990000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 04:19:06.005962    6144 config.go:182] Loaded profile config "stopped-upgrade-716000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0916 04:19:06.006004    6144 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 04:19:06.009501    6144 out.go:177] * Using the qemu2 driver based on user configuration
	I0916 04:19:06.020657    6144 start.go:297] selected driver: qemu2
	I0916 04:19:06.020666    6144 start.go:901] validating driver "qemu2" against <nil>
	I0916 04:19:06.020674    6144 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 04:19:06.022929    6144 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 04:19:06.025533    6144 out.go:177] * Automatically selected the socket_vmnet network
	I0916 04:19:06.028639    6144 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 04:19:06.028655    6144 cni.go:84] Creating CNI manager for ""
	I0916 04:19:06.028677    6144 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 04:19:06.028683    6144 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0916 04:19:06.028716    6144 start.go:340] cluster config:
	{Name:no-preload-654000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-654000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 04:19:06.032442    6144 iso.go:125] acquiring lock: {Name:mk55e9fb1297ea51932361bbd0234f8c1091a697 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 04:19:06.039555    6144 out.go:177] * Starting "no-preload-654000" primary control-plane node in "no-preload-654000" cluster
	I0916 04:19:06.043599    6144 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 04:19:06.043666    6144 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/no-preload-654000/config.json ...
	I0916 04:19:06.043679    6144 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/no-preload-654000/config.json: {Name:mkb1631ae8cd59f6bbe98cd1e03426e5e7982b03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 04:19:06.043698    6144 cache.go:107] acquiring lock: {Name:mk757e29d8fcbb1c2f9b7cb7704e295731e3b58f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 04:19:06.043704    6144 cache.go:107] acquiring lock: {Name:mk2f75d2f24c8528c1a9b92bbf03584ee6cebfe4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 04:19:06.043758    6144 cache.go:115] /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0916 04:19:06.043764    6144 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 70.042µs
	I0916 04:19:06.043769    6144 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0916 04:19:06.043770    6144 cache.go:107] acquiring lock: {Name:mk85878d75c7fd336f5e868171ad79ca4fba12dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 04:19:06.043783    6144 cache.go:107] acquiring lock: {Name:mkedd7afb3fe01a6ce97726c100e72f48241ab19 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 04:19:06.043775    6144 cache.go:107] acquiring lock: {Name:mkda0f0b402ed49bb1bdde797b2a998a3f8f187d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 04:19:06.043863    6144 cache.go:107] acquiring lock: {Name:mkc9bd607dffe84129df5787d123a0c994da742b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 04:19:06.043886    6144 cache.go:107] acquiring lock: {Name:mkc19a1170435a292a481f217d8e25f59d29195e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 04:19:06.043900    6144 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0916 04:19:06.043930    6144 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0916 04:19:06.043946    6144 cache.go:107] acquiring lock: {Name:mk75d0b9efa18c7c1d1ee3e50004c6a3dfff79af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 04:19:06.043991    6144 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0916 04:19:06.044087    6144 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0916 04:19:06.044135    6144 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0916 04:19:06.044155    6144 start.go:360] acquireMachinesLock for no-preload-654000: {Name:mk751074fc8f4092f9eb4cf42ed6e63cca97a7fa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 04:19:06.044173    6144 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0916 04:19:06.044188    6144 start.go:364] duration metric: took 27.5µs to acquireMachinesLock for "no-preload-654000"
	I0916 04:19:06.044198    6144 start.go:93] Provisioning new machine with config: &{Name:no-preload-654000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-654000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 04:19:06.044224    6144 start.go:125] createHost starting for "" (driver="qemu2")
	I0916 04:19:06.044267    6144 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0916 04:19:06.048599    6144 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0916 04:19:06.056134    6144 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0916 04:19:06.056270    6144 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0916 04:19:06.056959    6144 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0916 04:19:06.057162    6144 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0916 04:19:06.058081    6144 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0916 04:19:06.058122    6144 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0916 04:19:06.058159    6144 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0916 04:19:06.064510    6144 start.go:159] libmachine.API.Create for "no-preload-654000" (driver="qemu2")
	I0916 04:19:06.064565    6144 client.go:168] LocalClient.Create starting
	I0916 04:19:06.064660    6144 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/ca.pem
	I0916 04:19:06.064695    6144 main.go:141] libmachine: Decoding PEM data...
	I0916 04:19:06.064704    6144 main.go:141] libmachine: Parsing certificate...
	I0916 04:19:06.064747    6144 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/cert.pem
	I0916 04:19:06.064774    6144 main.go:141] libmachine: Decoding PEM data...
	I0916 04:19:06.064782    6144 main.go:141] libmachine: Parsing certificate...
	I0916 04:19:06.065141    6144 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19651-1133/.minikube/cache/iso/arm64/minikube-v1.34.0-1726415472-19646-arm64.iso...
	I0916 04:19:06.235320    6144 main.go:141] libmachine: Creating SSH key...
	I0916 04:19:06.289047    6144 main.go:141] libmachine: Creating Disk image...
	I0916 04:19:06.289069    6144 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0916 04:19:06.289281    6144 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/no-preload-654000/disk.qcow2.raw /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/no-preload-654000/disk.qcow2
	I0916 04:19:06.298961    6144 main.go:141] libmachine: STDOUT: 
	I0916 04:19:06.298981    6144 main.go:141] libmachine: STDERR: 
	I0916 04:19:06.299049    6144 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/no-preload-654000/disk.qcow2 +20000M
	I0916 04:19:06.308019    6144 main.go:141] libmachine: STDOUT: Image resized.
	
	I0916 04:19:06.308053    6144 main.go:141] libmachine: STDERR: 
	I0916 04:19:06.308078    6144 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/no-preload-654000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/no-preload-654000/disk.qcow2
	I0916 04:19:06.308083    6144 main.go:141] libmachine: Starting QEMU VM...
	I0916 04:19:06.308099    6144 qemu.go:418] Using hvf for hardware acceleration
	I0916 04:19:06.308133    6144 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/no-preload-654000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/no-preload-654000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/no-preload-654000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:7a:41:fe:9a:6c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/no-preload-654000/disk.qcow2
	I0916 04:19:06.309969    6144 main.go:141] libmachine: STDOUT: 
	I0916 04:19:06.309984    6144 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 04:19:06.310004    6144 client.go:171] duration metric: took 245.4365ms to LocalClient.Create
	I0916 04:19:06.480030    6144 cache.go:162] opening:  /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1
	I0916 04:19:06.486347    6144 cache.go:162] opening:  /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1
	I0916 04:19:06.498169    6144 cache.go:162] opening:  /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10
	I0916 04:19:06.499574    6144 cache.go:162] opening:  /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1
	I0916 04:19:06.521306    6144 cache.go:162] opening:  /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3
	I0916 04:19:06.547566    6144 cache.go:162] opening:  /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0916 04:19:06.554139    6144 cache.go:162] opening:  /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0
	I0916 04:19:06.643469    6144 cache.go:157] /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0916 04:19:06.643478    6144 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 599.714583ms
	I0916 04:19:06.643485    6144 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0916 04:19:08.310113    6144 start.go:128] duration metric: took 2.265916375s to createHost
	I0916 04:19:08.310140    6144 start.go:83] releasing machines lock for "no-preload-654000", held for 2.265992167s
	W0916 04:19:08.310166    6144 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 04:19:08.319865    6144 out.go:177] * Deleting "no-preload-654000" in qemu2 ...
	W0916 04:19:08.338976    6144 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 04:19:08.338991    6144 start.go:729] Will try again in 5 seconds ...
	I0916 04:19:09.147544    6144 cache.go:157] /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I0916 04:19:09.147566    6144 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1" took 3.103715875s
	I0916 04:19:09.147574    6144 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I0916 04:19:09.347720    6144 cache.go:157] /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I0916 04:19:09.347741    6144 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1" took 3.304043417s
	I0916 04:19:09.347751    6144 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I0916 04:19:09.715345    6144 cache.go:157] /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0916 04:19:09.715373    6144 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 3.671615875s
	I0916 04:19:09.715384    6144 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0916 04:19:09.970775    6144 cache.go:157] /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I0916 04:19:09.970788    6144 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1" took 3.927168625s
	I0916 04:19:09.970794    6144 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I0916 04:19:10.367789    6144 cache.go:157] /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I0916 04:19:10.367811    6144 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1" took 4.324009541s
	I0916 04:19:10.367830    6144 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I0916 04:19:13.283807    6144 cache.go:157] /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0916 04:19:13.283823    6144 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 7.240182709s
	I0916 04:19:13.283838    6144 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0916 04:19:13.283848    6144 cache.go:87] Successfully saved all images to host disk.
	I0916 04:19:13.340968    6144 start.go:360] acquireMachinesLock for no-preload-654000: {Name:mk751074fc8f4092f9eb4cf42ed6e63cca97a7fa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 04:19:13.341114    6144 start.go:364] duration metric: took 118.917µs to acquireMachinesLock for "no-preload-654000"
	I0916 04:19:13.341130    6144 start.go:93] Provisioning new machine with config: &{Name:no-preload-654000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-654000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 04:19:13.341170    6144 start.go:125] createHost starting for "" (driver="qemu2")
	I0916 04:19:13.354393    6144 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0916 04:19:13.370973    6144 start.go:159] libmachine.API.Create for "no-preload-654000" (driver="qemu2")
	I0916 04:19:13.371011    6144 client.go:168] LocalClient.Create starting
	I0916 04:19:13.371086    6144 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/ca.pem
	I0916 04:19:13.371133    6144 main.go:141] libmachine: Decoding PEM data...
	I0916 04:19:13.371144    6144 main.go:141] libmachine: Parsing certificate...
	I0916 04:19:13.371187    6144 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/cert.pem
	I0916 04:19:13.371215    6144 main.go:141] libmachine: Decoding PEM data...
	I0916 04:19:13.371224    6144 main.go:141] libmachine: Parsing certificate...
	I0916 04:19:13.371514    6144 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19651-1133/.minikube/cache/iso/arm64/minikube-v1.34.0-1726415472-19646-arm64.iso...
	I0916 04:19:13.704355    6144 main.go:141] libmachine: Creating SSH key...
	I0916 04:19:13.728791    6144 main.go:141] libmachine: Creating Disk image...
	I0916 04:19:13.728800    6144 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0916 04:19:13.729005    6144 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/no-preload-654000/disk.qcow2.raw /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/no-preload-654000/disk.qcow2
	I0916 04:19:13.738885    6144 main.go:141] libmachine: STDOUT: 
	I0916 04:19:13.738905    6144 main.go:141] libmachine: STDERR: 
	I0916 04:19:13.738970    6144 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/no-preload-654000/disk.qcow2 +20000M
	I0916 04:19:13.747262    6144 main.go:141] libmachine: STDOUT: Image resized.
	
	I0916 04:19:13.747292    6144 main.go:141] libmachine: STDERR: 
	I0916 04:19:13.747305    6144 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/no-preload-654000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/no-preload-654000/disk.qcow2
	I0916 04:19:13.747311    6144 main.go:141] libmachine: Starting QEMU VM...
	I0916 04:19:13.747319    6144 qemu.go:418] Using hvf for hardware acceleration
	I0916 04:19:13.747359    6144 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/no-preload-654000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/no-preload-654000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/no-preload-654000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:76:8f:2f:21:e9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/no-preload-654000/disk.qcow2
	I0916 04:19:13.749187    6144 main.go:141] libmachine: STDOUT: 
	I0916 04:19:13.749208    6144 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 04:19:13.749222    6144 client.go:171] duration metric: took 378.213083ms to LocalClient.Create
	I0916 04:19:15.751374    6144 start.go:128] duration metric: took 2.410210333s to createHost
	I0916 04:19:15.751456    6144 start.go:83] releasing machines lock for "no-preload-654000", held for 2.410378875s
	W0916 04:19:15.751879    6144 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-654000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-654000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 04:19:15.761125    6144 out.go:201] 
	W0916 04:19:15.765514    6144 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0916 04:19:15.765573    6144 out.go:270] * 
	* 
	W0916 04:19:15.768364    6144 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 04:19:15.779479    6144 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-654000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-654000 -n no-preload-654000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-654000 -n no-preload-654000: exit status 7 (69.479375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-654000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.94s)
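Every qemu2 start in this group dies on the same `Failed to connect to "/var/run/socket_vmnet": Connection refused`, and it happens before the guest exists: `socket_vmnet_client` cannot reach the socket_vmnet daemon's unix socket, so the fault is on the CI host rather than in minikube or the VM image. A hedged triage sketch; the daemon path and `--vmnet-gateway` value below assume a stock socket_vmnet install under /opt/socket_vmnet, which the client path in the log suggests but does not confirm:

	# Does the unix socket exist, and is the daemon alive?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet

	# If the daemon is gone, restarting it (as root) typically clears the
	# "Connection refused"; 192.168.105.1 is the project's default gateway.
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet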

TestStartStop/group/no-preload/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-654000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-654000 create -f testdata/busybox.yaml: exit status 1 (30.17225ms)

** stderr ** 
	error: context "no-preload-654000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-654000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-654000 -n no-preload-654000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-654000 -n no-preload-654000: exit status 7 (30.185875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-654000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-654000 -n no-preload-654000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-654000 -n no-preload-654000: exit status 7 (31.027458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-654000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)
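`error: context "no-preload-654000" does not exist` follows directly from the failed FirstStart: minikube only writes a kubeconfig context once a cluster actually comes up, so every kubectl call against this profile fails before reaching a server. One way to verify, using the kubeconfig this run points at:

	# The failed profile should be absent from the context list entirely.
	KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig \
	  kubectl config get-contexts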

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-654000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-654000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-654000 describe deploy/metrics-server -n kube-system: exit status 1 (26.774542ms)

** stderr ** 
	error: context "no-preload-654000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-654000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-654000 -n no-preload-654000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-654000 -n no-preload-654000: exit status 7 (29.894333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-654000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)
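The `addons enable` half of this test passes because it only rewrites the profile config on disk; the verification half needs a live apiserver to describe the metrics-server deployment and find the overridden `fake.domain/registry.k8s.io/echoserver:1.4` image. A sketch of what that check looks like against a healthy cluster, assuming the same context name would then exist:

	# On a running cluster, the custom registry/image override should
	# appear on the deployment's first container.
	kubectl --context no-preload-654000 -n kube-system \
	  get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'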

TestStartStop/group/no-preload/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-654000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-654000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.191275s)

-- stdout --
	* [no-preload-654000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19651
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-654000" primary control-plane node in "no-preload-654000" cluster
	* Restarting existing qemu2 VM for "no-preload-654000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-654000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0916 04:19:19.207985    6224 out.go:345] Setting OutFile to fd 1 ...
	I0916 04:19:19.208114    6224 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:19:19.208118    6224 out.go:358] Setting ErrFile to fd 2...
	I0916 04:19:19.208120    6224 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:19:19.208236    6224 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1133/.minikube/bin
	I0916 04:19:19.209305    6224 out.go:352] Setting JSON to false
	I0916 04:19:19.226397    6224 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4722,"bootTime":1726480837,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0916 04:19:19.226482    6224 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 04:19:19.231380    6224 out.go:177] * [no-preload-654000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0916 04:19:19.239450    6224 notify.go:220] Checking for updates...
	I0916 04:19:19.242342    6224 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 04:19:19.250315    6224 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	I0916 04:19:19.256320    6224 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0916 04:19:19.260328    6224 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 04:19:19.264311    6224 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	I0916 04:19:19.268288    6224 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 04:19:19.272430    6224 config.go:182] Loaded profile config "no-preload-654000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 04:19:19.272704    6224 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 04:19:19.277358    6224 out.go:177] * Using the qemu2 driver based on existing profile
	I0916 04:19:19.284155    6224 start.go:297] selected driver: qemu2
	I0916 04:19:19.284160    6224 start.go:901] validating driver "qemu2" against &{Name:no-preload-654000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-654000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 04:19:19.284210    6224 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 04:19:19.286479    6224 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 04:19:19.286507    6224 cni.go:84] Creating CNI manager for ""
	I0916 04:19:19.286526    6224 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 04:19:19.286558    6224 start.go:340] cluster config:
	{Name:no-preload-654000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-654000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 04:19:19.290062    6224 iso.go:125] acquiring lock: {Name:mk55e9fb1297ea51932361bbd0234f8c1091a697 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 04:19:19.297887    6224 out.go:177] * Starting "no-preload-654000" primary control-plane node in "no-preload-654000" cluster
	I0916 04:19:19.301240    6224 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 04:19:19.301322    6224 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/no-preload-654000/config.json ...
	I0916 04:19:19.301361    6224 cache.go:107] acquiring lock: {Name:mk757e29d8fcbb1c2f9b7cb7704e295731e3b58f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 04:19:19.301415    6224 cache.go:107] acquiring lock: {Name:mkc19a1170435a292a481f217d8e25f59d29195e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 04:19:19.301423    6224 cache.go:115] /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0916 04:19:19.301433    6224 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 79.5µs
	I0916 04:19:19.301439    6224 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0916 04:19:19.301448    6224 cache.go:107] acquiring lock: {Name:mkc9bd607dffe84129df5787d123a0c994da742b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 04:19:19.301455    6224 cache.go:107] acquiring lock: {Name:mkedd7afb3fe01a6ce97726c100e72f48241ab19 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 04:19:19.301482    6224 cache.go:115] /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I0916 04:19:19.301456    6224 cache.go:107] acquiring lock: {Name:mk2f75d2f24c8528c1a9b92bbf03584ee6cebfe4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 04:19:19.301454    6224 cache.go:107] acquiring lock: {Name:mkda0f0b402ed49bb1bdde797b2a998a3f8f187d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 04:19:19.301505    6224 cache.go:115] /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0916 04:19:19.301503    6224 cache.go:107] acquiring lock: {Name:mk75d0b9efa18c7c1d1ee3e50004c6a3dfff79af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 04:19:19.301510    6224 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 66.292µs
	I0916 04:19:19.301514    6224 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0916 04:19:19.301493    6224 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1" took 106.125µs
	I0916 04:19:19.301534    6224 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I0916 04:19:19.301520    6224 cache.go:107] acquiring lock: {Name:mk85878d75c7fd336f5e868171ad79ca4fba12dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 04:19:19.301563    6224 cache.go:115] /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I0916 04:19:19.301568    6224 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1" took 65.917µs
	I0916 04:19:19.301571    6224 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I0916 04:19:19.301564    6224 cache.go:115] /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0916 04:19:19.301578    6224 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 149.875µs
	I0916 04:19:19.301581    6224 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0916 04:19:19.301586    6224 cache.go:115] /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0916 04:19:19.301596    6224 cache.go:115] /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I0916 04:19:19.301601    6224 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1" took 82.25µs
	I0916 04:19:19.301607    6224 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I0916 04:19:19.301613    6224 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 169.542µs
	I0916 04:19:19.301623    6224 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0916 04:19:19.301667    6224 cache.go:115] /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I0916 04:19:19.301672    6224 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1" took 248.25µs
	I0916 04:19:19.301676    6224 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I0916 04:19:19.301684    6224 cache.go:87] Successfully saved all images to host disk.
	I0916 04:19:19.301795    6224 start.go:360] acquireMachinesLock for no-preload-654000: {Name:mk751074fc8f4092f9eb4cf42ed6e63cca97a7fa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 04:19:19.301828    6224 start.go:364] duration metric: took 27.875µs to acquireMachinesLock for "no-preload-654000"
	I0916 04:19:19.301837    6224 start.go:96] Skipping create...Using existing machine configuration
	I0916 04:19:19.301840    6224 fix.go:54] fixHost starting: 
	I0916 04:19:19.301958    6224 fix.go:112] recreateIfNeeded on no-preload-654000: state=Stopped err=<nil>
	W0916 04:19:19.301966    6224 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 04:19:19.310217    6224 out.go:177] * Restarting existing qemu2 VM for "no-preload-654000" ...
	I0916 04:19:19.314336    6224 qemu.go:418] Using hvf for hardware acceleration
	I0916 04:19:19.314369    6224 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/no-preload-654000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/no-preload-654000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/no-preload-654000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:76:8f:2f:21:e9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/no-preload-654000/disk.qcow2
	I0916 04:19:19.316209    6224 main.go:141] libmachine: STDOUT: 
	I0916 04:19:19.316228    6224 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 04:19:19.316258    6224 fix.go:56] duration metric: took 14.415667ms for fixHost
	I0916 04:19:19.316262    6224 start.go:83] releasing machines lock for "no-preload-654000", held for 14.43025ms
	W0916 04:19:19.316268    6224 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0916 04:19:19.316303    6224 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 04:19:19.316307    6224 start.go:729] Will try again in 5 seconds ...
	I0916 04:19:24.318450    6224 start.go:360] acquireMachinesLock for no-preload-654000: {Name:mk751074fc8f4092f9eb4cf42ed6e63cca97a7fa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 04:19:24.318920    6224 start.go:364] duration metric: took 394.083µs to acquireMachinesLock for "no-preload-654000"
	I0916 04:19:24.319048    6224 start.go:96] Skipping create...Using existing machine configuration
	I0916 04:19:24.319062    6224 fix.go:54] fixHost starting: 
	I0916 04:19:24.319568    6224 fix.go:112] recreateIfNeeded on no-preload-654000: state=Stopped err=<nil>
	W0916 04:19:24.319588    6224 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 04:19:24.325016    6224 out.go:177] * Restarting existing qemu2 VM for "no-preload-654000" ...
	I0916 04:19:24.328948    6224 qemu.go:418] Using hvf for hardware acceleration
	I0916 04:19:24.329093    6224 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/no-preload-654000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/no-preload-654000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/no-preload-654000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:76:8f:2f:21:e9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/no-preload-654000/disk.qcow2
	I0916 04:19:24.337437    6224 main.go:141] libmachine: STDOUT: 
	I0916 04:19:24.337492    6224 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 04:19:24.337571    6224 fix.go:56] duration metric: took 18.5085ms for fixHost
	I0916 04:19:24.337592    6224 start.go:83] releasing machines lock for "no-preload-654000", held for 18.646291ms
	W0916 04:19:24.337751    6224 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-654000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-654000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 04:19:24.344945    6224 out.go:201] 
	W0916 04:19:24.348042    6224 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0916 04:19:24.348072    6224 out.go:270] * 
	* 
	W0916 04:19:24.349478    6224 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 04:19:24.358894    6224 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-654000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-654000 -n no-preload-654000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-654000 -n no-preload-654000: exit status 7 (62.08175ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-654000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.26s)
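Every qemu2 start in this run fails at the same step: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so the VM never boots and each post-mortem finds the host "Stopped". A minimal triage sketch for the build host, assuming socket_vmnet is installed under /opt/socket_vmnet as the client path in the log suggests; the --vmnet-gateway address below is an illustrative placeholder, not a value taken from this report:

    # Is the unix socket present, and is the daemon alive?
    ls -l /var/run/socket_vmnet
    pgrep -fl socket_vmnet

    # If the daemon is down, start it by hand (vmnet requires root;
    # the gateway address is a placeholder for illustration):
    sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet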

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-654000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-654000 -n no-preload-654000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-654000 -n no-preload-654000: exit status 7 (32.401958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-654000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-654000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-654000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-654000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.596417ms)

** stderr ** 
	error: context "no-preload-654000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-654000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-654000 -n no-preload-654000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-654000 -n no-preload-654000: exit status 7 (29.909666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-654000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-654000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-654000 -n no-preload-654000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-654000 -n no-preload-654000: exit status 7 (29.730167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-654000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/no-preload/serial/Pause (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-654000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-654000 --alsologtostderr -v=1: exit status 83 (40.195792ms)

-- stdout --
	* The control-plane node no-preload-654000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-654000"

-- /stdout --
** stderr ** 
	I0916 04:19:24.627847    6243 out.go:345] Setting OutFile to fd 1 ...
	I0916 04:19:24.628026    6243 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:19:24.628029    6243 out.go:358] Setting ErrFile to fd 2...
	I0916 04:19:24.628032    6243 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:19:24.628151    6243 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1133/.minikube/bin
	I0916 04:19:24.628379    6243 out.go:352] Setting JSON to false
	I0916 04:19:24.628383    6243 mustload.go:65] Loading cluster: no-preload-654000
	I0916 04:19:24.628608    6243 config.go:182] Loaded profile config "no-preload-654000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 04:19:24.632415    6243 out.go:177] * The control-plane node no-preload-654000 host is not running: state=Stopped
	I0916 04:19:24.635426    6243 out.go:177]   To start a cluster, run: "minikube start -p no-preload-654000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-654000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-654000 -n no-preload-654000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-654000 -n no-preload-654000: exit status 7 (28.963709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-654000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-654000 -n no-preload-654000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-654000 -n no-preload-654000: exit status 7 (29.494833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-654000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)

TestStartStop/group/embed-certs/serial/FirstStart (10.01s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-309000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-309000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.944472792s)

-- stdout --
	* [embed-certs-309000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19651
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-309000" primary control-plane node in "embed-certs-309000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-309000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0916 04:19:24.937871    6260 out.go:345] Setting OutFile to fd 1 ...
	I0916 04:19:24.938009    6260 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:19:24.938013    6260 out.go:358] Setting ErrFile to fd 2...
	I0916 04:19:24.938015    6260 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:19:24.938151    6260 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1133/.minikube/bin
	I0916 04:19:24.939309    6260 out.go:352] Setting JSON to false
	I0916 04:19:24.955598    6260 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4727,"bootTime":1726480837,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0916 04:19:24.955698    6260 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 04:19:24.960373    6260 out.go:177] * [embed-certs-309000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0916 04:19:24.966368    6260 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 04:19:24.966462    6260 notify.go:220] Checking for updates...
	I0916 04:19:24.973356    6260 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	I0916 04:19:24.976251    6260 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0916 04:19:24.979346    6260 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 04:19:24.982347    6260 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	I0916 04:19:24.985237    6260 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 04:19:24.988598    6260 config.go:182] Loaded profile config "multinode-990000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 04:19:24.988653    6260 config.go:182] Loaded profile config "stopped-upgrade-716000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0916 04:19:24.988705    6260 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 04:19:24.993311    6260 out.go:177] * Using the qemu2 driver based on user configuration
	I0916 04:19:25.000379    6260 start.go:297] selected driver: qemu2
	I0916 04:19:25.000385    6260 start.go:901] validating driver "qemu2" against <nil>
	I0916 04:19:25.000391    6260 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 04:19:25.002616    6260 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 04:19:25.006325    6260 out.go:177] * Automatically selected the socket_vmnet network
	I0916 04:19:25.007898    6260 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 04:19:25.007925    6260 cni.go:84] Creating CNI manager for ""
	I0916 04:19:25.007958    6260 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 04:19:25.007962    6260 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0916 04:19:25.007993    6260 start.go:340] cluster config:
	{Name:embed-certs-309000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-309000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 04:19:25.011397    6260 iso.go:125] acquiring lock: {Name:mk55e9fb1297ea51932361bbd0234f8c1091a697 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 04:19:25.018456    6260 out.go:177] * Starting "embed-certs-309000" primary control-plane node in "embed-certs-309000" cluster
	I0916 04:19:25.022273    6260 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 04:19:25.022290    6260 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0916 04:19:25.022301    6260 cache.go:56] Caching tarball of preloaded images
	I0916 04:19:25.022365    6260 preload.go:172] Found /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0916 04:19:25.022371    6260 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0916 04:19:25.022433    6260 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/embed-certs-309000/config.json ...
	I0916 04:19:25.022443    6260 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/embed-certs-309000/config.json: {Name:mk4cfe5eafcb54f6e767319e6dbd89afb8c79a75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 04:19:25.022655    6260 start.go:360] acquireMachinesLock for embed-certs-309000: {Name:mk751074fc8f4092f9eb4cf42ed6e63cca97a7fa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 04:19:25.022686    6260 start.go:364] duration metric: took 24.292µs to acquireMachinesLock for "embed-certs-309000"
	I0916 04:19:25.022698    6260 start.go:93] Provisioning new machine with config: &{Name:embed-certs-309000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-309000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 04:19:25.022727    6260 start.go:125] createHost starting for "" (driver="qemu2")
	I0916 04:19:25.031314    6260 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0916 04:19:25.046903    6260 start.go:159] libmachine.API.Create for "embed-certs-309000" (driver="qemu2")
	I0916 04:19:25.046925    6260 client.go:168] LocalClient.Create starting
	I0916 04:19:25.046998    6260 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/ca.pem
	I0916 04:19:25.047029    6260 main.go:141] libmachine: Decoding PEM data...
	I0916 04:19:25.047039    6260 main.go:141] libmachine: Parsing certificate...
	I0916 04:19:25.047082    6260 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/cert.pem
	I0916 04:19:25.047111    6260 main.go:141] libmachine: Decoding PEM data...
	I0916 04:19:25.047123    6260 main.go:141] libmachine: Parsing certificate...
	I0916 04:19:25.047475    6260 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19651-1133/.minikube/cache/iso/arm64/minikube-v1.34.0-1726415472-19646-arm64.iso...
	I0916 04:19:25.211150    6260 main.go:141] libmachine: Creating SSH key...
	I0916 04:19:25.340186    6260 main.go:141] libmachine: Creating Disk image...
	I0916 04:19:25.340194    6260 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0916 04:19:25.340416    6260 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/embed-certs-309000/disk.qcow2.raw /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/embed-certs-309000/disk.qcow2
	I0916 04:19:25.349828    6260 main.go:141] libmachine: STDOUT: 
	I0916 04:19:25.349849    6260 main.go:141] libmachine: STDERR: 
	I0916 04:19:25.349915    6260 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/embed-certs-309000/disk.qcow2 +20000M
	I0916 04:19:25.357740    6260 main.go:141] libmachine: STDOUT: Image resized.
	
	I0916 04:19:25.357754    6260 main.go:141] libmachine: STDERR: 
	I0916 04:19:25.357773    6260 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/embed-certs-309000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/embed-certs-309000/disk.qcow2
	I0916 04:19:25.357779    6260 main.go:141] libmachine: Starting QEMU VM...
	I0916 04:19:25.357790    6260 qemu.go:418] Using hvf for hardware acceleration
	I0916 04:19:25.357818    6260 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/embed-certs-309000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/embed-certs-309000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/embed-certs-309000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:98:30:82:f2:3c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/embed-certs-309000/disk.qcow2
	I0916 04:19:25.359433    6260 main.go:141] libmachine: STDOUT: 
	I0916 04:19:25.359447    6260 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 04:19:25.359470    6260 client.go:171] duration metric: took 312.544ms to LocalClient.Create
	I0916 04:19:27.361653    6260 start.go:128] duration metric: took 2.338941958s to createHost
	I0916 04:19:27.361768    6260 start.go:83] releasing machines lock for "embed-certs-309000", held for 2.339118208s
	W0916 04:19:27.361823    6260 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 04:19:27.377649    6260 out.go:177] * Deleting "embed-certs-309000" in qemu2 ...
	W0916 04:19:27.409074    6260 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 04:19:27.409105    6260 start.go:729] Will try again in 5 seconds ...
	I0916 04:19:32.411167    6260 start.go:360] acquireMachinesLock for embed-certs-309000: {Name:mk751074fc8f4092f9eb4cf42ed6e63cca97a7fa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 04:19:32.411589    6260 start.go:364] duration metric: took 301.458µs to acquireMachinesLock for "embed-certs-309000"
	I0916 04:19:32.411723    6260 start.go:93] Provisioning new machine with config: &{Name:embed-certs-309000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-309000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 04:19:32.412049    6260 start.go:125] createHost starting for "" (driver="qemu2")
	I0916 04:19:32.420012    6260 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0916 04:19:32.471517    6260 start.go:159] libmachine.API.Create for "embed-certs-309000" (driver="qemu2")
	I0916 04:19:32.471579    6260 client.go:168] LocalClient.Create starting
	I0916 04:19:32.471693    6260 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/ca.pem
	I0916 04:19:32.471756    6260 main.go:141] libmachine: Decoding PEM data...
	I0916 04:19:32.471804    6260 main.go:141] libmachine: Parsing certificate...
	I0916 04:19:32.471869    6260 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/cert.pem
	I0916 04:19:32.471931    6260 main.go:141] libmachine: Decoding PEM data...
	I0916 04:19:32.471943    6260 main.go:141] libmachine: Parsing certificate...
	I0916 04:19:32.472704    6260 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19651-1133/.minikube/cache/iso/arm64/minikube-v1.34.0-1726415472-19646-arm64.iso...
	I0916 04:19:32.644686    6260 main.go:141] libmachine: Creating SSH key...
	I0916 04:19:32.788138    6260 main.go:141] libmachine: Creating Disk image...
	I0916 04:19:32.788146    6260 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0916 04:19:32.788343    6260 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/embed-certs-309000/disk.qcow2.raw /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/embed-certs-309000/disk.qcow2
	I0916 04:19:32.797758    6260 main.go:141] libmachine: STDOUT: 
	I0916 04:19:32.797802    6260 main.go:141] libmachine: STDERR: 
	I0916 04:19:32.797889    6260 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/embed-certs-309000/disk.qcow2 +20000M
	I0916 04:19:32.805713    6260 main.go:141] libmachine: STDOUT: Image resized.
	
	I0916 04:19:32.805728    6260 main.go:141] libmachine: STDERR: 
	I0916 04:19:32.805739    6260 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/embed-certs-309000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/embed-certs-309000/disk.qcow2
	I0916 04:19:32.805745    6260 main.go:141] libmachine: Starting QEMU VM...
	I0916 04:19:32.805754    6260 qemu.go:418] Using hvf for hardware acceleration
	I0916 04:19:32.805796    6260 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/embed-certs-309000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/embed-certs-309000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/embed-certs-309000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:c0:62:af:09:e7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/embed-certs-309000/disk.qcow2
	I0916 04:19:32.807412    6260 main.go:141] libmachine: STDOUT: 
	I0916 04:19:32.807426    6260 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 04:19:32.807439    6260 client.go:171] duration metric: took 335.860042ms to LocalClient.Create
	I0916 04:19:34.809571    6260 start.go:128] duration metric: took 2.397538459s to createHost
	I0916 04:19:34.809643    6260 start.go:83] releasing machines lock for "embed-certs-309000", held for 2.39807575s
	W0916 04:19:34.809955    6260 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-309000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-309000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 04:19:34.820464    6260 out.go:201] 
	W0916 04:19:34.829620    6260 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0916 04:19:34.829678    6260 out.go:270] * 
	* 
	W0916 04:19:34.832445    6260 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 04:19:34.839551    6260 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-309000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-309000 -n embed-certs-309000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-309000 -n embed-certs-309000: exit status 7 (64.849041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-309000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (10.01s)
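The create-path failures isolate the broken step: qemu-img convert and qemu-img resize both exit cleanly, and the run only dies when socket_vmnet_client tries to hand the new qemu process its network fd. As a sketch, that client call can be reproduced without minikube using the same binary and socket path shown in the log (socket_vmnet_client connects to the socket first and only then execs the trailing command, here a no-op true):

    # Expected to print the same "Failed to connect to
    # /var/run/socket_vmnet: Connection refused" while the daemon is down:
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true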

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.87s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-383000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-383000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.79772675s)

-- stdout --
	* [default-k8s-diff-port-383000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19651
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-383000" primary control-plane node in "default-k8s-diff-port-383000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-383000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0916 04:19:29.673815    6280 out.go:345] Setting OutFile to fd 1 ...
	I0916 04:19:29.673963    6280 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:19:29.673966    6280 out.go:358] Setting ErrFile to fd 2...
	I0916 04:19:29.673969    6280 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:19:29.674108    6280 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1133/.minikube/bin
	I0916 04:19:29.675223    6280 out.go:352] Setting JSON to false
	I0916 04:19:29.691365    6280 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4732,"bootTime":1726480837,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0916 04:19:29.691429    6280 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 04:19:29.696289    6280 out.go:177] * [default-k8s-diff-port-383000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0916 04:19:29.704193    6280 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 04:19:29.704244    6280 notify.go:220] Checking for updates...
	I0916 04:19:29.711249    6280 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	I0916 04:19:29.712824    6280 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0916 04:19:29.716280    6280 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 04:19:29.719267    6280 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	I0916 04:19:29.722279    6280 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 04:19:29.725648    6280 config.go:182] Loaded profile config "embed-certs-309000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 04:19:29.725708    6280 config.go:182] Loaded profile config "multinode-990000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 04:19:29.725752    6280 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 04:19:29.730214    6280 out.go:177] * Using the qemu2 driver based on user configuration
	I0916 04:19:29.737248    6280 start.go:297] selected driver: qemu2
	I0916 04:19:29.737255    6280 start.go:901] validating driver "qemu2" against <nil>
	I0916 04:19:29.737265    6280 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 04:19:29.739658    6280 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 04:19:29.742282    6280 out.go:177] * Automatically selected the socket_vmnet network
	I0916 04:19:29.745370    6280 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 04:19:29.745396    6280 cni.go:84] Creating CNI manager for ""
	I0916 04:19:29.745418    6280 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 04:19:29.745426    6280 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0916 04:19:29.745454    6280 start.go:340] cluster config:
	{Name:default-k8s-diff-port-383000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-383000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 04:19:29.749253    6280 iso.go:125] acquiring lock: {Name:mk55e9fb1297ea51932361bbd0234f8c1091a697 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 04:19:29.756244    6280 out.go:177] * Starting "default-k8s-diff-port-383000" primary control-plane node in "default-k8s-diff-port-383000" cluster
	I0916 04:19:29.759167    6280 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 04:19:29.759188    6280 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0916 04:19:29.759199    6280 cache.go:56] Caching tarball of preloaded images
	I0916 04:19:29.759278    6280 preload.go:172] Found /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0916 04:19:29.759284    6280 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0916 04:19:29.759357    6280 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/default-k8s-diff-port-383000/config.json ...
	I0916 04:19:29.759369    6280 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/default-k8s-diff-port-383000/config.json: {Name:mk58f7d6afde01dff04a7373ecedc60a08357e98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 04:19:29.759594    6280 start.go:360] acquireMachinesLock for default-k8s-diff-port-383000: {Name:mk751074fc8f4092f9eb4cf42ed6e63cca97a7fa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 04:19:29.759629    6280 start.go:364] duration metric: took 27.417µs to acquireMachinesLock for "default-k8s-diff-port-383000"
	I0916 04:19:29.759640    6280 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-383000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-383000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 04:19:29.759664    6280 start.go:125] createHost starting for "" (driver="qemu2")
	I0916 04:19:29.768131    6280 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0916 04:19:29.786048    6280 start.go:159] libmachine.API.Create for "default-k8s-diff-port-383000" (driver="qemu2")
	I0916 04:19:29.786083    6280 client.go:168] LocalClient.Create starting
	I0916 04:19:29.786155    6280 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/ca.pem
	I0916 04:19:29.786189    6280 main.go:141] libmachine: Decoding PEM data...
	I0916 04:19:29.786199    6280 main.go:141] libmachine: Parsing certificate...
	I0916 04:19:29.786235    6280 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/cert.pem
	I0916 04:19:29.786262    6280 main.go:141] libmachine: Decoding PEM data...
	I0916 04:19:29.786270    6280 main.go:141] libmachine: Parsing certificate...
	I0916 04:19:29.786767    6280 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19651-1133/.minikube/cache/iso/arm64/minikube-v1.34.0-1726415472-19646-arm64.iso...
	I0916 04:19:29.976782    6280 main.go:141] libmachine: Creating SSH key...
	I0916 04:19:30.013136    6280 main.go:141] libmachine: Creating Disk image...
	I0916 04:19:30.013141    6280 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0916 04:19:30.013333    6280 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/default-k8s-diff-port-383000/disk.qcow2.raw /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/default-k8s-diff-port-383000/disk.qcow2
	I0916 04:19:30.022407    6280 main.go:141] libmachine: STDOUT: 
	I0916 04:19:30.022424    6280 main.go:141] libmachine: STDERR: 
	I0916 04:19:30.022480    6280 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/default-k8s-diff-port-383000/disk.qcow2 +20000M
	I0916 04:19:30.030289    6280 main.go:141] libmachine: STDOUT: Image resized.
	
	I0916 04:19:30.030309    6280 main.go:141] libmachine: STDERR: 
	I0916 04:19:30.030322    6280 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/default-k8s-diff-port-383000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/default-k8s-diff-port-383000/disk.qcow2
	I0916 04:19:30.030329    6280 main.go:141] libmachine: Starting QEMU VM...
	I0916 04:19:30.030347    6280 qemu.go:418] Using hvf for hardware acceleration
	I0916 04:19:30.030376    6280 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/default-k8s-diff-port-383000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/default-k8s-diff-port-383000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/default-k8s-diff-port-383000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:82:99:65:10:26 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/default-k8s-diff-port-383000/disk.qcow2
	I0916 04:19:30.031967    6280 main.go:141] libmachine: STDOUT: 
	I0916 04:19:30.031983    6280 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 04:19:30.032005    6280 client.go:171] duration metric: took 245.919708ms to LocalClient.Create
	I0916 04:19:32.034139    6280 start.go:128] duration metric: took 2.274497667s to createHost
	I0916 04:19:32.034238    6280 start.go:83] releasing machines lock for "default-k8s-diff-port-383000", held for 2.274621333s
	W0916 04:19:32.034289    6280 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 04:19:32.040411    6280 out.go:177] * Deleting "default-k8s-diff-port-383000" in qemu2 ...
	W0916 04:19:32.071402    6280 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 04:19:32.071426    6280 start.go:729] Will try again in 5 seconds ...
	I0916 04:19:37.073488    6280 start.go:360] acquireMachinesLock for default-k8s-diff-port-383000: {Name:mk751074fc8f4092f9eb4cf42ed6e63cca97a7fa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 04:19:37.073960    6280 start.go:364] duration metric: took 384.667µs to acquireMachinesLock for "default-k8s-diff-port-383000"
	I0916 04:19:37.074028    6280 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-383000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-383000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 04:19:37.074303    6280 start.go:125] createHost starting for "" (driver="qemu2")
	I0916 04:19:37.080130    6280 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0916 04:19:37.131658    6280 start.go:159] libmachine.API.Create for "default-k8s-diff-port-383000" (driver="qemu2")
	I0916 04:19:37.131710    6280 client.go:168] LocalClient.Create starting
	I0916 04:19:37.131803    6280 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/ca.pem
	I0916 04:19:37.131854    6280 main.go:141] libmachine: Decoding PEM data...
	I0916 04:19:37.131871    6280 main.go:141] libmachine: Parsing certificate...
	I0916 04:19:37.131929    6280 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/cert.pem
	I0916 04:19:37.131959    6280 main.go:141] libmachine: Decoding PEM data...
	I0916 04:19:37.131972    6280 main.go:141] libmachine: Parsing certificate...
	I0916 04:19:37.132489    6280 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19651-1133/.minikube/cache/iso/arm64/minikube-v1.34.0-1726415472-19646-arm64.iso...
	I0916 04:19:37.315933    6280 main.go:141] libmachine: Creating SSH key...
	I0916 04:19:37.372184    6280 main.go:141] libmachine: Creating Disk image...
	I0916 04:19:37.372190    6280 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0916 04:19:37.372384    6280 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/default-k8s-diff-port-383000/disk.qcow2.raw /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/default-k8s-diff-port-383000/disk.qcow2
	I0916 04:19:37.381614    6280 main.go:141] libmachine: STDOUT: 
	I0916 04:19:37.381637    6280 main.go:141] libmachine: STDERR: 
	I0916 04:19:37.381702    6280 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/default-k8s-diff-port-383000/disk.qcow2 +20000M
	I0916 04:19:37.389425    6280 main.go:141] libmachine: STDOUT: Image resized.
	
	I0916 04:19:37.389438    6280 main.go:141] libmachine: STDERR: 
	I0916 04:19:37.389454    6280 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/default-k8s-diff-port-383000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/default-k8s-diff-port-383000/disk.qcow2
	I0916 04:19:37.389461    6280 main.go:141] libmachine: Starting QEMU VM...
	I0916 04:19:37.389469    6280 qemu.go:418] Using hvf for hardware acceleration
	I0916 04:19:37.389494    6280 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/default-k8s-diff-port-383000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/default-k8s-diff-port-383000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/default-k8s-diff-port-383000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:e6:ad:62:ea:13 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/default-k8s-diff-port-383000/disk.qcow2
	I0916 04:19:37.391090    6280 main.go:141] libmachine: STDOUT: 
	I0916 04:19:37.391105    6280 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 04:19:37.391117    6280 client.go:171] duration metric: took 259.406ms to LocalClient.Create
	I0916 04:19:39.393270    6280 start.go:128] duration metric: took 2.318955292s to createHost
	I0916 04:19:39.393347    6280 start.go:83] releasing machines lock for "default-k8s-diff-port-383000", held for 2.319406666s
	W0916 04:19:39.393598    6280 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-383000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-383000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 04:19:39.406073    6280 out.go:201] 
	W0916 04:19:39.414277    6280 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0916 04:19:39.414314    6280 out.go:270] * 
	* 
	W0916 04:19:39.416913    6280 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 04:19:39.425973    6280 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-383000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-383000 -n default-k8s-diff-port-383000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-383000 -n default-k8s-diff-port-383000: exit status 7 (65.85775ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-383000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.87s)
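Every create and restart attempt in the FirstStart failure above dies at the same point: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so QEMU is never launched. The standalone Go sketch below reproduces just that probe with net.DialTimeout. It is a hedged diagnostic, not code from minikube or this test suite; the file layout, 2-second timeout, and program name are illustrative choices, and the socket path is taken from the SocketVMnetPath value in the cluster config logged above.

// probe_socket_vmnet.go — minimal sketch: dial the unix socket that
// socket_vmnet_client tries to reach. A "connection refused" result
// matches the STDERR captured in every attempt in this run.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // SocketVMnetPath from the logged config

	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// Either the socket file is missing or nothing is listening
		// behind it; the latter produces "Connection refused".
		fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
		os.Exit(1)
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If the dial fails with "connection refused" while the socket file exists, no socket_vmnet daemon is serving it on this host, which is consistent with every qemu2 start in this report failing before the VM boots.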

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-309000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-309000 create -f testdata/busybox.yaml: exit status 1 (29.688833ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-309000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-309000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-309000 -n embed-certs-309000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-309000 -n embed-certs-309000: exit status 7 (29.462833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-309000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-309000 -n embed-certs-309000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-309000 -n embed-certs-309000: exit status 7 (29.963458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-309000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.09s)
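The DeployApp failure is a downstream symptom: because FirstStart never brought up the cluster, no kubeconfig context named embed-certs-309000 was ever written, so kubectl exits with `context "embed-certs-309000" does not exist`. As a hedged illustration (not code from this suite), the sketch below checks for a context before invoking kubectl, using `kubectl config get-contexts -o name`; the helper name contextExists is hypothetical, and kubectl is assumed to be on PATH.

// context_check.go — sketch: confirm a kubeconfig context exists before
// running `kubectl --context <name> ...`, avoiding the failure above.
package main

import (
	"bufio"
	"bytes"
	"fmt"
	"os/exec"
)

// contextExists scans the context names printed one per line by
// `kubectl config get-contexts -o name`.
func contextExists(name string) (bool, error) {
	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
	if err != nil {
		return false, err
	}
	sc := bufio.NewScanner(bytes.NewReader(out))
	for sc.Scan() {
		if sc.Text() == name {
			return true, nil
		}
	}
	return false, sc.Err()
}

func main() {
	const profile = "embed-certs-309000" // profile name from this run
	ok, err := contextExists(profile)
	if err != nil {
		fmt.Println("could not list contexts:", err)
		return
	}
	if !ok {
		// Matches the log: the cluster never started, so no context
		// for the profile was ever written to the kubeconfig.
		fmt.Printf("context %q does not exist; skipping kubectl steps\n", profile)
		return
	}
	fmt.Printf("context %q is present\n", profile)
}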

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-309000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-309000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-309000 describe deploy/metrics-server -n kube-system: exit status 1 (27.0355ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-309000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-309000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-309000 -n embed-certs-309000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-309000 -n embed-certs-309000: exit status 7 (28.6585ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-309000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (5.86s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-309000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-309000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.789381209s)

                                                
                                                
-- stdout --
	* [embed-certs-309000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19651
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-309000" primary control-plane node in "embed-certs-309000" cluster
	* Restarting existing qemu2 VM for "embed-certs-309000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-309000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0916 04:19:38.720657    6332 out.go:345] Setting OutFile to fd 1 ...
	I0916 04:19:38.720774    6332 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:19:38.720778    6332 out.go:358] Setting ErrFile to fd 2...
	I0916 04:19:38.720780    6332 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:19:38.720906    6332 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1133/.minikube/bin
	I0916 04:19:38.721878    6332 out.go:352] Setting JSON to false
	I0916 04:19:38.737921    6332 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4741,"bootTime":1726480837,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0916 04:19:38.737989    6332 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 04:19:38.741969    6332 out.go:177] * [embed-certs-309000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0916 04:19:38.749955    6332 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 04:19:38.749990    6332 notify.go:220] Checking for updates...
	I0916 04:19:38.757048    6332 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	I0916 04:19:38.759870    6332 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0916 04:19:38.762973    6332 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 04:19:38.765961    6332 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	I0916 04:19:38.768968    6332 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 04:19:38.772263    6332 config.go:182] Loaded profile config "embed-certs-309000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 04:19:38.772536    6332 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 04:19:38.777004    6332 out.go:177] * Using the qemu2 driver based on existing profile
	I0916 04:19:38.783969    6332 start.go:297] selected driver: qemu2
	I0916 04:19:38.783977    6332 start.go:901] validating driver "qemu2" against &{Name:embed-certs-309000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-309000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 04:19:38.784058    6332 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 04:19:38.786529    6332 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 04:19:38.786552    6332 cni.go:84] Creating CNI manager for ""
	I0916 04:19:38.786574    6332 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 04:19:38.786613    6332 start.go:340] cluster config:
	{Name:embed-certs-309000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-309000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 04:19:38.790284    6332 iso.go:125] acquiring lock: {Name:mk55e9fb1297ea51932361bbd0234f8c1091a697 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 04:19:38.797997    6332 out.go:177] * Starting "embed-certs-309000" primary control-plane node in "embed-certs-309000" cluster
	I0916 04:19:38.801914    6332 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 04:19:38.801930    6332 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0916 04:19:38.801941    6332 cache.go:56] Caching tarball of preloaded images
	I0916 04:19:38.802008    6332 preload.go:172] Found /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0916 04:19:38.802014    6332 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0916 04:19:38.802075    6332 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/embed-certs-309000/config.json ...
	I0916 04:19:38.802636    6332 start.go:360] acquireMachinesLock for embed-certs-309000: {Name:mk751074fc8f4092f9eb4cf42ed6e63cca97a7fa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 04:19:39.393486    6332 start.go:364] duration metric: took 590.840375ms to acquireMachinesLock for "embed-certs-309000"
	I0916 04:19:39.393655    6332 start.go:96] Skipping create...Using existing machine configuration
	I0916 04:19:39.393686    6332 fix.go:54] fixHost starting: 
	I0916 04:19:39.394404    6332 fix.go:112] recreateIfNeeded on embed-certs-309000: state=Stopped err=<nil>
	W0916 04:19:39.394449    6332 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 04:19:39.406070    6332 out.go:177] * Restarting existing qemu2 VM for "embed-certs-309000" ...
	I0916 04:19:39.410199    6332 qemu.go:418] Using hvf for hardware acceleration
	I0916 04:19:39.410408    6332 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/embed-certs-309000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/embed-certs-309000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/embed-certs-309000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:c0:62:af:09:e7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/embed-certs-309000/disk.qcow2
	I0916 04:19:39.420382    6332 main.go:141] libmachine: STDOUT: 
	I0916 04:19:39.420461    6332 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 04:19:39.420617    6332 fix.go:56] duration metric: took 26.930583ms for fixHost
	I0916 04:19:39.420648    6332 start.go:83] releasing machines lock for "embed-certs-309000", held for 27.101708ms
	W0916 04:19:39.420690    6332 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0916 04:19:39.420909    6332 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 04:19:39.420929    6332 start.go:729] Will try again in 5 seconds ...
	I0916 04:19:44.423052    6332 start.go:360] acquireMachinesLock for embed-certs-309000: {Name:mk751074fc8f4092f9eb4cf42ed6e63cca97a7fa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 04:19:44.423459    6332 start.go:364] duration metric: took 328.625µs to acquireMachinesLock for "embed-certs-309000"
	I0916 04:19:44.423598    6332 start.go:96] Skipping create...Using existing machine configuration
	I0916 04:19:44.423620    6332 fix.go:54] fixHost starting: 
	I0916 04:19:44.424356    6332 fix.go:112] recreateIfNeeded on embed-certs-309000: state=Stopped err=<nil>
	W0916 04:19:44.424384    6332 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 04:19:44.432971    6332 out.go:177] * Restarting existing qemu2 VM for "embed-certs-309000" ...
	I0916 04:19:44.436980    6332 qemu.go:418] Using hvf for hardware acceleration
	I0916 04:19:44.437141    6332 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/embed-certs-309000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/embed-certs-309000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/embed-certs-309000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:c0:62:af:09:e7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/embed-certs-309000/disk.qcow2
	I0916 04:19:44.446618    6332 main.go:141] libmachine: STDOUT: 
	I0916 04:19:44.446672    6332 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 04:19:44.446775    6332 fix.go:56] duration metric: took 23.141666ms for fixHost
	I0916 04:19:44.446795    6332 start.go:83] releasing machines lock for "embed-certs-309000", held for 23.316084ms
	W0916 04:19:44.446959    6332 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-309000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-309000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 04:19:44.453923    6332 out.go:201] 
	W0916 04:19:44.457994    6332 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0916 04:19:44.458022    6332 out.go:270] * 
	* 
	W0916 04:19:44.460589    6332 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 04:19:44.467985    6332 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-309000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-309000 -n embed-certs-309000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-309000 -n embed-certs-309000: exit status 7 (65.125375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-309000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.86s)
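Throughout these post-mortems, the harness runs minikube status with --format={{.Host}} and tolerates exit status 7, which in this run always accompanies a Stopped host ("status error: exit status 7 (may be ok)"). The sketch below is a minimal, hedged rendering of that pattern in Go; the binary path out/minikube-darwin-arm64 and the profile name are taken from this report, and the exit-code handling mirrors only what the log shows, not any documented minikube guarantee.

// status_check.go — sketch of the post-mortem status probe: run
// `minikube status` for a profile and treat exit status 7 as
// non-fatal, as helpers_test.go does in the output above.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	profile := "default-k8s-diff-port-383000" // profile from this run

	cmd := exec.Command("out/minikube-darwin-arm64",
		"status", "--format={{.Host}}", "-p", profile, "-n", profile)
	out, err := cmd.Output() // stdout is still captured on non-zero exit

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 7 {
		// In this run, exit status 7 coincides with a Stopped host;
		// the tests log it as "status error: exit status 7 (may be ok)".
		fmt.Printf("host for %q reported: %s (exit 7, may be ok)\n", profile, out)
		return
	}
	if err != nil {
		fmt.Printf("status failed: %v\n", err)
		return
	}
	fmt.Printf("host state: %s\n", out)
}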

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-383000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-383000 create -f testdata/busybox.yaml: exit status 1 (29.100417ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-383000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-383000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-383000 -n default-k8s-diff-port-383000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-383000 -n default-k8s-diff-port-383000: exit status 7 (29.504875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-383000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-383000 -n default-k8s-diff-port-383000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-383000 -n default-k8s-diff-port-383000: exit status 7 (28.648416ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-383000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-383000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-383000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-383000 describe deploy/metrics-server -n kube-system: exit status 1 (26.952459ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-383000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-383000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-383000 -n default-k8s-diff-port-383000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-383000 -n default-k8s-diff-port-383000: exit status 7 (28.943ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-383000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-383000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-383000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.193012209s)

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-383000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19651
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-383000" primary control-plane node in "default-k8s-diff-port-383000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-383000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-383000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0916 04:19:42.661351    6376 out.go:345] Setting OutFile to fd 1 ...
	I0916 04:19:42.661486    6376 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:19:42.661490    6376 out.go:358] Setting ErrFile to fd 2...
	I0916 04:19:42.661492    6376 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:19:42.661621    6376 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1133/.minikube/bin
	I0916 04:19:42.662597    6376 out.go:352] Setting JSON to false
	I0916 04:19:42.678983    6376 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4745,"bootTime":1726480837,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0916 04:19:42.679063    6376 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 04:19:42.682345    6376 out.go:177] * [default-k8s-diff-port-383000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0916 04:19:42.689434    6376 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 04:19:42.689467    6376 notify.go:220] Checking for updates...
	I0916 04:19:42.695298    6376 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	I0916 04:19:42.698338    6376 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0916 04:19:42.701254    6376 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 04:19:42.704361    6376 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	I0916 04:19:42.707352    6376 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 04:19:42.709113    6376 config.go:182] Loaded profile config "default-k8s-diff-port-383000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 04:19:42.709354    6376 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 04:19:42.713355    6376 out.go:177] * Using the qemu2 driver based on existing profile
	I0916 04:19:42.720210    6376 start.go:297] selected driver: qemu2
	I0916 04:19:42.720216    6376 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-383000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-383000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 04:19:42.720268    6376 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 04:19:42.722541    6376 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 04:19:42.722561    6376 cni.go:84] Creating CNI manager for ""
	I0916 04:19:42.722578    6376 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 04:19:42.722595    6376 start.go:340] cluster config:
	{Name:default-k8s-diff-port-383000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-383000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 04:19:42.726029    6376 iso.go:125] acquiring lock: {Name:mk55e9fb1297ea51932361bbd0234f8c1091a697 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 04:19:42.733395    6376 out.go:177] * Starting "default-k8s-diff-port-383000" primary control-plane node in "default-k8s-diff-port-383000" cluster
	I0916 04:19:42.737399    6376 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 04:19:42.737412    6376 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0916 04:19:42.737419    6376 cache.go:56] Caching tarball of preloaded images
	I0916 04:19:42.737468    6376 preload.go:172] Found /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0916 04:19:42.737473    6376 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0916 04:19:42.737520    6376 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/default-k8s-diff-port-383000/config.json ...
	I0916 04:19:42.738024    6376 start.go:360] acquireMachinesLock for default-k8s-diff-port-383000: {Name:mk751074fc8f4092f9eb4cf42ed6e63cca97a7fa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 04:19:42.738049    6376 start.go:364] duration metric: took 19.5µs to acquireMachinesLock for "default-k8s-diff-port-383000"
	I0916 04:19:42.738058    6376 start.go:96] Skipping create...Using existing machine configuration
	I0916 04:19:42.738063    6376 fix.go:54] fixHost starting: 
	I0916 04:19:42.738172    6376 fix.go:112] recreateIfNeeded on default-k8s-diff-port-383000: state=Stopped err=<nil>
	W0916 04:19:42.738180    6376 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 04:19:42.741337    6376 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-383000" ...
	I0916 04:19:42.749350    6376 qemu.go:418] Using hvf for hardware acceleration
	I0916 04:19:42.749391    6376 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/default-k8s-diff-port-383000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/default-k8s-diff-port-383000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/default-k8s-diff-port-383000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:e6:ad:62:ea:13 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/default-k8s-diff-port-383000/disk.qcow2
	I0916 04:19:42.752062    6376 main.go:141] libmachine: STDOUT: 
	I0916 04:19:42.752080    6376 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 04:19:42.752107    6376 fix.go:56] duration metric: took 14.044583ms for fixHost
	I0916 04:19:42.752111    6376 start.go:83] releasing machines lock for "default-k8s-diff-port-383000", held for 14.056958ms
	W0916 04:19:42.752117    6376 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0916 04:19:42.752165    6376 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 04:19:42.752170    6376 start.go:729] Will try again in 5 seconds ...
	I0916 04:19:47.754304    6376 start.go:360] acquireMachinesLock for default-k8s-diff-port-383000: {Name:mk751074fc8f4092f9eb4cf42ed6e63cca97a7fa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 04:19:47.754855    6376 start.go:364] duration metric: took 419.958µs to acquireMachinesLock for "default-k8s-diff-port-383000"
	I0916 04:19:47.755021    6376 start.go:96] Skipping create...Using existing machine configuration
	I0916 04:19:47.755043    6376 fix.go:54] fixHost starting: 
	I0916 04:19:47.755837    6376 fix.go:112] recreateIfNeeded on default-k8s-diff-port-383000: state=Stopped err=<nil>
	W0916 04:19:47.755867    6376 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 04:19:47.775465    6376 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-383000" ...
	I0916 04:19:47.780234    6376 qemu.go:418] Using hvf for hardware acceleration
	I0916 04:19:47.780478    6376 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/default-k8s-diff-port-383000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/default-k8s-diff-port-383000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/default-k8s-diff-port-383000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:e6:ad:62:ea:13 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/default-k8s-diff-port-383000/disk.qcow2
	I0916 04:19:47.790587    6376 main.go:141] libmachine: STDOUT: 
	I0916 04:19:47.790653    6376 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 04:19:47.790758    6376 fix.go:56] duration metric: took 35.713083ms for fixHost
	I0916 04:19:47.790777    6376 start.go:83] releasing machines lock for "default-k8s-diff-port-383000", held for 35.897167ms
	W0916 04:19:47.790957    6376 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-383000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-383000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 04:19:47.797232    6376 out.go:201] 
	W0916 04:19:47.800378    6376 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0916 04:19:47.800408    6376 out.go:270] * 
	* 
	W0916 04:19:47.803148    6376 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 04:19:47.812276    6376 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-383000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-383000 -n default-k8s-diff-port-383000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-383000 -n default-k8s-diff-port-383000: exit status 7 (65.5175ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-383000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.26s)
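Note: every failure in this group shares one root cause visible in the logs above: nothing is listening on /var/run/socket_vmnet, so /opt/socket_vmnet/bin/socket_vmnet_client exits with "Connection refused" before QEMU ever starts. A minimal Go sketch of the same reachability check (a hypothetical standalone probe, not part of the test suite; the socket path is the one used throughout this run):

	// socketprobe.go - sketch: is the socket_vmnet daemon accepting connections?
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// Dial the unix socket that socket_vmnet_client needs; a refused
		// connection here reproduces the STDERR seen in every start attempt.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If the probe fails, restarting the socket_vmnet daemon on the CI host is the likely fix; deleting the profile, as the log output suggests, would not address it.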

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-309000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-309000 -n embed-certs-309000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-309000 -n embed-certs-309000: exit status 7 (32.460916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-309000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-309000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-309000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-309000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.551417ms)

** stderr ** 
	error: context "embed-certs-309000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-309000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-309000 -n embed-certs-309000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-309000 -n embed-certs-309000: exit status 7 (29.564292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-309000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)
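Note: the "context ... does not exist" errors follow directly from the failed start: minikube never wrote the embed-certs-309000 context into the kubeconfig, so kubectl fails before contacting any server. A sketch of the lookup that produces this error, using client-go (illustrative only; the context name comes from the failure above):

	// ctxcheck.go - sketch: does a named context exist in the default kubeconfig?
	package main

	import (
		"fmt"
		"os"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load ~/.kube/config (or $KUBECONFIG), the same rules kubectl applies.
		cfg, err := clientcmd.NewDefaultClientConfigLoadingRules().Load()
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if _, ok := cfg.Contexts["embed-certs-309000"]; !ok {
			fmt.Println(`context "embed-certs-309000" does not exist`)
			os.Exit(1)
		}
	}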

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-309000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-309000 -n embed-certs-309000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-309000 -n embed-certs-309000: exit status 7 (29.312625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-309000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)
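Note: the (-want +got) block above is go-cmp diff output: the test compares the expected v1.31.1 image list against the result of "minikube image list --format=json", and because the VM never started, got is empty and every expected image is reported as missing. A reduced sketch of that comparison (assuming github.com/google/go-cmp, which produces this diff shape):

	// imagediff.go - sketch of the want/got comparison behind "-want +got".
	package main

	import (
		"fmt"

		"github.com/google/go-cmp/cmp"
	)

	func main() {
		want := []string{
			"gcr.io/k8s-minikube/storage-provisioner:v5",
			"registry.k8s.io/kube-apiserver:v1.31.1",
			// ...remaining expected v1.31.1 images elided
		}
		var got []string // empty: the host never ran, so `image list` returned nothing
		if diff := cmp.Diff(want, got); diff != "" {
			fmt.Printf("v1.31.1 images missing (-want +got):\n%s", diff)
		}
	}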

TestStartStop/group/embed-certs/serial/Pause (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-309000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-309000 --alsologtostderr -v=1: exit status 83 (41.1625ms)

-- stdout --
	* The control-plane node embed-certs-309000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-309000"

-- /stdout --
** stderr ** 
	I0916 04:19:44.734880    6396 out.go:345] Setting OutFile to fd 1 ...
	I0916 04:19:44.735073    6396 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:19:44.735075    6396 out.go:358] Setting ErrFile to fd 2...
	I0916 04:19:44.735078    6396 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:19:44.735219    6396 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1133/.minikube/bin
	I0916 04:19:44.735426    6396 out.go:352] Setting JSON to false
	I0916 04:19:44.735431    6396 mustload.go:65] Loading cluster: embed-certs-309000
	I0916 04:19:44.735640    6396 config.go:182] Loaded profile config "embed-certs-309000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 04:19:44.739619    6396 out.go:177] * The control-plane node embed-certs-309000 host is not running: state=Stopped
	I0916 04:19:44.743592    6396 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-309000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-309000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-309000 -n embed-certs-309000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-309000 -n embed-certs-309000: exit status 7 (29.753375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-309000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-309000 -n embed-certs-309000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-309000 -n embed-certs-309000: exit status 7 (29.1955ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-309000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.10s)

TestStartStop/group/newest-cni/serial/FirstStart (10.04s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-580000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-580000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.964603458s)

-- stdout --
	* [newest-cni-580000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19651
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-580000" primary control-plane node in "newest-cni-580000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-580000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0916 04:19:45.054480    6413 out.go:345] Setting OutFile to fd 1 ...
	I0916 04:19:45.054620    6413 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:19:45.054623    6413 out.go:358] Setting ErrFile to fd 2...
	I0916 04:19:45.054625    6413 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:19:45.054751    6413 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1133/.minikube/bin
	I0916 04:19:45.055856    6413 out.go:352] Setting JSON to false
	I0916 04:19:45.072150    6413 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4748,"bootTime":1726480837,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0916 04:19:45.072220    6413 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 04:19:45.076738    6413 out.go:177] * [newest-cni-580000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0916 04:19:45.083683    6413 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 04:19:45.083732    6413 notify.go:220] Checking for updates...
	I0916 04:19:45.089595    6413 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	I0916 04:19:45.092625    6413 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0916 04:19:45.095646    6413 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 04:19:45.098560    6413 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	I0916 04:19:45.101617    6413 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 04:19:45.105034    6413 config.go:182] Loaded profile config "default-k8s-diff-port-383000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 04:19:45.105099    6413 config.go:182] Loaded profile config "multinode-990000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 04:19:45.105151    6413 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 04:19:45.109520    6413 out.go:177] * Using the qemu2 driver based on user configuration
	I0916 04:19:45.116670    6413 start.go:297] selected driver: qemu2
	I0916 04:19:45.116687    6413 start.go:901] validating driver "qemu2" against <nil>
	I0916 04:19:45.116695    6413 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 04:19:45.118963    6413 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0916 04:19:45.119003    6413 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0916 04:19:45.122634    6413 out.go:177] * Automatically selected the socket_vmnet network
	I0916 04:19:45.125633    6413 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0916 04:19:45.125647    6413 cni.go:84] Creating CNI manager for ""
	I0916 04:19:45.125675    6413 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 04:19:45.125679    6413 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0916 04:19:45.125721    6413 start.go:340] cluster config:
	{Name:newest-cni-580000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-580000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 04:19:45.129411    6413 iso.go:125] acquiring lock: {Name:mk55e9fb1297ea51932361bbd0234f8c1091a697 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 04:19:45.134614    6413 out.go:177] * Starting "newest-cni-580000" primary control-plane node in "newest-cni-580000" cluster
	I0916 04:19:45.138655    6413 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 04:19:45.138672    6413 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0916 04:19:45.138689    6413 cache.go:56] Caching tarball of preloaded images
	I0916 04:19:45.138772    6413 preload.go:172] Found /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0916 04:19:45.138778    6413 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0916 04:19:45.138845    6413 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/newest-cni-580000/config.json ...
	I0916 04:19:45.138857    6413 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/newest-cni-580000/config.json: {Name:mk7eace9131dbcf3a24e1d5882d11cdbe4269544 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 04:19:45.139084    6413 start.go:360] acquireMachinesLock for newest-cni-580000: {Name:mk751074fc8f4092f9eb4cf42ed6e63cca97a7fa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 04:19:45.139117    6413 start.go:364] duration metric: took 27.208µs to acquireMachinesLock for "newest-cni-580000"
	I0916 04:19:45.139127    6413 start.go:93] Provisioning new machine with config: &{Name:newest-cni-580000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-580000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 04:19:45.139166    6413 start.go:125] createHost starting for "" (driver="qemu2")
	I0916 04:19:45.146622    6413 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0916 04:19:45.164065    6413 start.go:159] libmachine.API.Create for "newest-cni-580000" (driver="qemu2")
	I0916 04:19:45.164108    6413 client.go:168] LocalClient.Create starting
	I0916 04:19:45.164199    6413 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/ca.pem
	I0916 04:19:45.164243    6413 main.go:141] libmachine: Decoding PEM data...
	I0916 04:19:45.164253    6413 main.go:141] libmachine: Parsing certificate...
	I0916 04:19:45.164293    6413 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/cert.pem
	I0916 04:19:45.164317    6413 main.go:141] libmachine: Decoding PEM data...
	I0916 04:19:45.164325    6413 main.go:141] libmachine: Parsing certificate...
	I0916 04:19:45.164671    6413 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19651-1133/.minikube/cache/iso/arm64/minikube-v1.34.0-1726415472-19646-arm64.iso...
	I0916 04:19:45.327794    6413 main.go:141] libmachine: Creating SSH key...
	I0916 04:19:45.468250    6413 main.go:141] libmachine: Creating Disk image...
	I0916 04:19:45.468256    6413 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0916 04:19:45.468450    6413 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/newest-cni-580000/disk.qcow2.raw /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/newest-cni-580000/disk.qcow2
	I0916 04:19:45.477922    6413 main.go:141] libmachine: STDOUT: 
	I0916 04:19:45.477939    6413 main.go:141] libmachine: STDERR: 
	I0916 04:19:45.477993    6413 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/newest-cni-580000/disk.qcow2 +20000M
	I0916 04:19:45.485809    6413 main.go:141] libmachine: STDOUT: Image resized.
	
	I0916 04:19:45.485823    6413 main.go:141] libmachine: STDERR: 
	I0916 04:19:45.485843    6413 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/newest-cni-580000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/newest-cni-580000/disk.qcow2
	I0916 04:19:45.485848    6413 main.go:141] libmachine: Starting QEMU VM...
	I0916 04:19:45.485862    6413 qemu.go:418] Using hvf for hardware acceleration
	I0916 04:19:45.485885    6413 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/newest-cni-580000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/newest-cni-580000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/newest-cni-580000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:aa:f1:fd:36:6b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/newest-cni-580000/disk.qcow2
	I0916 04:19:45.487479    6413 main.go:141] libmachine: STDOUT: 
	I0916 04:19:45.487496    6413 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 04:19:45.487523    6413 client.go:171] duration metric: took 323.407666ms to LocalClient.Create
	I0916 04:19:47.489781    6413 start.go:128] duration metric: took 2.350565208s to createHost
	I0916 04:19:47.489868    6413 start.go:83] releasing machines lock for "newest-cni-580000", held for 2.350788041s
	W0916 04:19:47.489920    6413 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 04:19:47.500379    6413 out.go:177] * Deleting "newest-cni-580000" in qemu2 ...
	W0916 04:19:47.539948    6413 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 04:19:47.539979    6413 start.go:729] Will try again in 5 seconds ...
	I0916 04:19:52.542204    6413 start.go:360] acquireMachinesLock for newest-cni-580000: {Name:mk751074fc8f4092f9eb4cf42ed6e63cca97a7fa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 04:19:52.542661    6413 start.go:364] duration metric: took 360.75µs to acquireMachinesLock for "newest-cni-580000"
	I0916 04:19:52.542799    6413 start.go:93] Provisioning new machine with config: &{Name:newest-cni-580000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-580000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 04:19:52.543187    6413 start.go:125] createHost starting for "" (driver="qemu2")
	I0916 04:19:52.548780    6413 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0916 04:19:52.597828    6413 start.go:159] libmachine.API.Create for "newest-cni-580000" (driver="qemu2")
	I0916 04:19:52.597890    6413 client.go:168] LocalClient.Create starting
	I0916 04:19:52.598002    6413 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/ca.pem
	I0916 04:19:52.598064    6413 main.go:141] libmachine: Decoding PEM data...
	I0916 04:19:52.598081    6413 main.go:141] libmachine: Parsing certificate...
	I0916 04:19:52.598189    6413 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1133/.minikube/certs/cert.pem
	I0916 04:19:52.598235    6413 main.go:141] libmachine: Decoding PEM data...
	I0916 04:19:52.598281    6413 main.go:141] libmachine: Parsing certificate...
	I0916 04:19:52.598873    6413 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19651-1133/.minikube/cache/iso/arm64/minikube-v1.34.0-1726415472-19646-arm64.iso...
	I0916 04:19:52.770496    6413 main.go:141] libmachine: Creating SSH key...
	I0916 04:19:52.912008    6413 main.go:141] libmachine: Creating Disk image...
	I0916 04:19:52.912014    6413 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0916 04:19:52.912222    6413 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/newest-cni-580000/disk.qcow2.raw /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/newest-cni-580000/disk.qcow2
	I0916 04:19:52.921855    6413 main.go:141] libmachine: STDOUT: 
	I0916 04:19:52.921872    6413 main.go:141] libmachine: STDERR: 
	I0916 04:19:52.921928    6413 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/newest-cni-580000/disk.qcow2 +20000M
	I0916 04:19:52.929720    6413 main.go:141] libmachine: STDOUT: Image resized.
	
	I0916 04:19:52.929737    6413 main.go:141] libmachine: STDERR: 
	I0916 04:19:52.929748    6413 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/newest-cni-580000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/newest-cni-580000/disk.qcow2
	I0916 04:19:52.929754    6413 main.go:141] libmachine: Starting QEMU VM...
	I0916 04:19:52.929769    6413 qemu.go:418] Using hvf for hardware acceleration
	I0916 04:19:52.929794    6413 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/newest-cni-580000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/newest-cni-580000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/newest-cni-580000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:f1:93:37:fd:17 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/newest-cni-580000/disk.qcow2
	I0916 04:19:52.931411    6413 main.go:141] libmachine: STDOUT: 
	I0916 04:19:52.931426    6413 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 04:19:52.931448    6413 client.go:171] duration metric: took 333.55875ms to LocalClient.Create
	I0916 04:19:54.933590    6413 start.go:128] duration metric: took 2.390421708s to createHost
	I0916 04:19:54.933646    6413 start.go:83] releasing machines lock for "newest-cni-580000", held for 2.391007333s
	W0916 04:19:54.934017    6413 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-580000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-580000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 04:19:54.951817    6413 out.go:201] 
	W0916 04:19:54.954879    6413 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0916 04:19:54.954902    6413 out.go:270] * 
	* 
	W0916 04:19:54.957567    6413 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 04:19:54.980316    6413 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-580000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-580000 -n newest-cni-580000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-580000 -n newest-cni-580000: exit status 7 (74.045833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-580000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (10.04s)
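Note: FirstStart shows the driver's full create-and-retry lifecycle: create the host, hit the socket_vmnet refusal, delete the half-created VM, wait five seconds, create once more, then exit with GUEST_PROVISION (exit status 80). A condensed sketch of that control flow (function names are illustrative, not minikube's actual API):

	// retryflow.go - illustrative shape of the create -> delete -> retry flow above.
	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	// createHost stands in for libmachine's host creation; in this run it
	// always fails because the socket_vmnet daemon is unreachable.
	func createHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func deleteHost() { /* clean up the partially created VM */ }

	func main() {
		if err := createHost(); err != nil {
			deleteHost()
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second)
			if err = createHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
				os.Exit(80)
			}
		}
	}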

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-383000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-383000 -n default-k8s-diff-port-383000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-383000 -n default-k8s-diff-port-383000: exit status 7 (32.040125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-383000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-383000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-383000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-383000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.824958ms)

** stderr ** 
	error: context "default-k8s-diff-port-383000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-383000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-383000 -n default-k8s-diff-port-383000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-383000 -n default-k8s-diff-port-383000: exit status 7 (29.361333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-383000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-383000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-383000 -n default-k8s-diff-port-383000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-383000 -n default-k8s-diff-port-383000: exit status 7 (28.848291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-383000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-383000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-383000 --alsologtostderr -v=1: exit status 83 (42.445ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-383000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-383000"

-- /stdout --
** stderr ** 
	I0916 04:19:48.082430    6438 out.go:345] Setting OutFile to fd 1 ...
	I0916 04:19:48.082607    6438 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:19:48.082610    6438 out.go:358] Setting ErrFile to fd 2...
	I0916 04:19:48.082613    6438 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:19:48.082726    6438 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1133/.minikube/bin
	I0916 04:19:48.082939    6438 out.go:352] Setting JSON to false
	I0916 04:19:48.082945    6438 mustload.go:65] Loading cluster: default-k8s-diff-port-383000
	I0916 04:19:48.083146    6438 config.go:182] Loaded profile config "default-k8s-diff-port-383000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 04:19:48.087216    6438 out.go:177] * The control-plane node default-k8s-diff-port-383000 host is not running: state=Stopped
	I0916 04:19:48.091199    6438 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-383000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-383000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-383000 -n default-k8s-diff-port-383000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-383000 -n default-k8s-diff-port-383000: exit status 7 (29.178833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-383000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-383000 -n default-k8s-diff-port-383000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-383000 -n default-k8s-diff-port-383000: exit status 7 (28.575291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-383000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)

TestStartStop/group/newest-cni/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-580000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-580000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.185617417s)

-- stdout --
	* [newest-cni-580000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19651
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-580000" primary control-plane node in "newest-cni-580000" cluster
	* Restarting existing qemu2 VM for "newest-cni-580000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-580000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0916 04:19:58.407128    6486 out.go:345] Setting OutFile to fd 1 ...
	I0916 04:19:58.407230    6486 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:19:58.407234    6486 out.go:358] Setting ErrFile to fd 2...
	I0916 04:19:58.407237    6486 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:19:58.407360    6486 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1133/.minikube/bin
	I0916 04:19:58.408481    6486 out.go:352] Setting JSON to false
	I0916 04:19:58.424224    6486 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4761,"bootTime":1726480837,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0916 04:19:58.424310    6486 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 04:19:58.428543    6486 out.go:177] * [newest-cni-580000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0916 04:19:58.435588    6486 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 04:19:58.435650    6486 notify.go:220] Checking for updates...
	I0916 04:19:58.442535    6486 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	I0916 04:19:58.445558    6486 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0916 04:19:58.448542    6486 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 04:19:58.451565    6486 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	I0916 04:19:58.454556    6486 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 04:19:58.457847    6486 config.go:182] Loaded profile config "newest-cni-580000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 04:19:58.458140    6486 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 04:19:58.462517    6486 out.go:177] * Using the qemu2 driver based on existing profile
	I0916 04:19:58.469546    6486 start.go:297] selected driver: qemu2
	I0916 04:19:58.469552    6486 start.go:901] validating driver "qemu2" against &{Name:newest-cni-580000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-580000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 04:19:58.469601    6486 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 04:19:58.471837    6486 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0916 04:19:58.471873    6486 cni.go:84] Creating CNI manager for ""
	I0916 04:19:58.471897    6486 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 04:19:58.471926    6486 start.go:340] cluster config:
	{Name:newest-cni-580000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-580000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 04:19:58.475326    6486 iso.go:125] acquiring lock: {Name:mk55e9fb1297ea51932361bbd0234f8c1091a697 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 04:19:58.482555    6486 out.go:177] * Starting "newest-cni-580000" primary control-plane node in "newest-cni-580000" cluster
	I0916 04:19:58.486576    6486 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 04:19:58.486599    6486 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0916 04:19:58.486609    6486 cache.go:56] Caching tarball of preloaded images
	I0916 04:19:58.486671    6486 preload.go:172] Found /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0916 04:19:58.486677    6486 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0916 04:19:58.486733    6486 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/newest-cni-580000/config.json ...
	I0916 04:19:58.487251    6486 start.go:360] acquireMachinesLock for newest-cni-580000: {Name:mk751074fc8f4092f9eb4cf42ed6e63cca97a7fa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 04:19:58.487289    6486 start.go:364] duration metric: took 31.375µs to acquireMachinesLock for "newest-cni-580000"
	I0916 04:19:58.487300    6486 start.go:96] Skipping create...Using existing machine configuration
	I0916 04:19:58.487307    6486 fix.go:54] fixHost starting: 
	I0916 04:19:58.487446    6486 fix.go:112] recreateIfNeeded on newest-cni-580000: state=Stopped err=<nil>
	W0916 04:19:58.487457    6486 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 04:19:58.491418    6486 out.go:177] * Restarting existing qemu2 VM for "newest-cni-580000" ...
	I0916 04:19:58.499563    6486 qemu.go:418] Using hvf for hardware acceleration
	I0916 04:19:58.499598    6486 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/newest-cni-580000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/newest-cni-580000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/newest-cni-580000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:f1:93:37:fd:17 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/newest-cni-580000/disk.qcow2
	I0916 04:19:58.501662    6486 main.go:141] libmachine: STDOUT: 
	I0916 04:19:58.501682    6486 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 04:19:58.501716    6486 fix.go:56] duration metric: took 14.41ms for fixHost
	I0916 04:19:58.501722    6486 start.go:83] releasing machines lock for "newest-cni-580000", held for 14.42775ms
	W0916 04:19:58.501727    6486 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0916 04:19:58.501767    6486 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 04:19:58.501772    6486 start.go:729] Will try again in 5 seconds ...
	I0916 04:20:03.503986    6486 start.go:360] acquireMachinesLock for newest-cni-580000: {Name:mk751074fc8f4092f9eb4cf42ed6e63cca97a7fa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 04:20:03.504496    6486 start.go:364] duration metric: took 383.333µs to acquireMachinesLock for "newest-cni-580000"
	I0916 04:20:03.504649    6486 start.go:96] Skipping create...Using existing machine configuration
	I0916 04:20:03.504670    6486 fix.go:54] fixHost starting: 
	I0916 04:20:03.505435    6486 fix.go:112] recreateIfNeeded on newest-cni-580000: state=Stopped err=<nil>
	W0916 04:20:03.505467    6486 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 04:20:03.516041    6486 out.go:177] * Restarting existing qemu2 VM for "newest-cni-580000" ...
	I0916 04:20:03.520023    6486 qemu.go:418] Using hvf for hardware acceleration
	I0916 04:20:03.520252    6486 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/newest-cni-580000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/newest-cni-580000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/newest-cni-580000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:f1:93:37:fd:17 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19651-1133/.minikube/machines/newest-cni-580000/disk.qcow2
	I0916 04:20:03.530116    6486 main.go:141] libmachine: STDOUT: 
	I0916 04:20:03.530186    6486 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 04:20:03.530295    6486 fix.go:56] duration metric: took 25.6265ms for fixHost
	I0916 04:20:03.530318    6486 start.go:83] releasing machines lock for "newest-cni-580000", held for 25.797917ms
	W0916 04:20:03.530477    6486 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-580000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-580000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 04:20:03.537878    6486 out.go:201] 
	W0916 04:20:03.542127    6486 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0916 04:20:03.542156    6486 out.go:270] * 
	* 
	W0916 04:20:03.544593    6486 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 04:20:03.552038    6486 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-580000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-580000 -n newest-cni-580000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-580000 -n newest-cni-580000: exit status 7 (69.136292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-580000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.26s)
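Note: the newest-cni failures in this group share one root cause visible in the log above: the qemu2 driver could not reach the socket_vmnet daemon (Failed to connect to /var/run/socket_vmnet: Connection refused), so the VM restart never happened and every later step ran against a stopped host. A minimal triage sketch for the build agent, assuming a Homebrew-managed socket_vmnet matching the paths already recorded in the log (/opt/socket_vmnet/bin/socket_vmnet_client, /var/run/socket_vmnet); the service name and launchd setup are assumptions, not taken from this report:

	# does the socket exist, and is the daemon loaded? (assumes a launchd/brew-services install)
	ls -l /var/run/socket_vmnet
	sudo launchctl list | grep -i socket_vmnet
	# restart the daemon, then re-run the failing start
	sudo brew services restart socket_vmnet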

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-580000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-580000 -n newest-cni-580000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-580000 -n newest-cni-580000: exit status 7 (30.507084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-580000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)
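Note: this images diff is a downstream symptom of the failed SecondStart rather than an independent regression: with the VM stopped, "image list" evidently returns an empty set, so every expected v1.31.1 image is reported missing. For cross-checking, the want-list is the standard control-plane image set for that release plus minikube's own storage-provisioner; one way to regenerate the upstream portion independently (assuming a kubeadm binary of a matching minor version is available) is:

	kubeadm config images list --kubernetes-version v1.31.1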

TestStartStop/group/newest-cni/serial/Pause (0.1s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-580000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-580000 --alsologtostderr -v=1: exit status 83 (40.062833ms)

-- stdout --
	* The control-plane node newest-cni-580000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-580000"

-- /stdout --
** stderr ** 
	I0916 04:20:03.738926    6501 out.go:345] Setting OutFile to fd 1 ...
	I0916 04:20:03.739091    6501 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:20:03.739094    6501 out.go:358] Setting ErrFile to fd 2...
	I0916 04:20:03.739096    6501 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:20:03.739227    6501 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1133/.minikube/bin
	I0916 04:20:03.739455    6501 out.go:352] Setting JSON to false
	I0916 04:20:03.739464    6501 mustload.go:65] Loading cluster: newest-cni-580000
	I0916 04:20:03.739697    6501 config.go:182] Loaded profile config "newest-cni-580000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 04:20:03.742631    6501 out.go:177] * The control-plane node newest-cni-580000 host is not running: state=Stopped
	I0916 04:20:03.746553    6501 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-580000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-580000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-580000 -n newest-cni-580000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-580000 -n newest-cni-580000: exit status 7 (29.48775ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-580000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-580000 -n newest-cni-580000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-580000 -n newest-cni-580000: exit status 7 (30.485375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-580000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)
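Note: exit status 83 here looks like minikube's host-not-running guardrail rather than a new defect: pause refuses to act on a stopped profile and prints its own recovery hint. Once the socket_vmnet daemon is reachable again, the sequence the CLI itself suggests should clear this failure (profile name taken from the log above):

	out/minikube-darwin-arm64 start -p newest-cni-580000
	out/minikube-darwin-arm64 pause -p newest-cni-580000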


Test pass (154/274)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.11
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.11
12 TestDownloadOnly/v1.31.1/json-events 7.13
13 TestDownloadOnly/v1.31.1/preload-exists 0
16 TestDownloadOnly/v1.31.1/kubectl 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.08
18 TestDownloadOnly/v1.31.1/DeleteAll 0.11
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.1
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 201.89
29 TestAddons/serial/Volcano 38.28
31 TestAddons/serial/GCPAuth/Namespaces 0.1
34 TestAddons/parallel/Ingress 17.52
35 TestAddons/parallel/InspektorGadget 10.29
36 TestAddons/parallel/MetricsServer 5.28
39 TestAddons/parallel/CSI 39.46
40 TestAddons/parallel/Headlamp 15.64
41 TestAddons/parallel/CloudSpanner 6.21
42 TestAddons/parallel/LocalPath 11.61
43 TestAddons/parallel/NvidiaDevicePlugin 6.22
44 TestAddons/parallel/Yakd 10.33
45 TestAddons/StoppedEnableDisable 12.4
53 TestHyperKitDriverInstallOrUpdate 10.84
56 TestErrorSpam/setup 35.24
57 TestErrorSpam/start 0.34
58 TestErrorSpam/status 0.24
59 TestErrorSpam/pause 0.68
60 TestErrorSpam/unpause 0.61
61 TestErrorSpam/stop 55.27
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 48.73
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 36.73
68 TestFunctional/serial/KubeContext 0.03
69 TestFunctional/serial/KubectlGetPods 0.05
72 TestFunctional/serial/CacheCmd/cache/add_remote 2.61
73 TestFunctional/serial/CacheCmd/cache/add_local 1.56
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
75 TestFunctional/serial/CacheCmd/cache/list 0.03
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.08
77 TestFunctional/serial/CacheCmd/cache/cache_reload 0.62
78 TestFunctional/serial/CacheCmd/cache/delete 0.07
79 TestFunctional/serial/MinikubeKubectlCmd 1.93
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.02
81 TestFunctional/serial/ExtraConfig 38.81
82 TestFunctional/serial/ComponentHealth 0.05
83 TestFunctional/serial/LogsCmd 0.66
84 TestFunctional/serial/LogsFileCmd 0.61
85 TestFunctional/serial/InvalidService 4.01
87 TestFunctional/parallel/ConfigCmd 0.23
88 TestFunctional/parallel/DashboardCmd 9.9
89 TestFunctional/parallel/DryRun 0.25
90 TestFunctional/parallel/InternationalLanguage 0.12
91 TestFunctional/parallel/StatusCmd 0.25
96 TestFunctional/parallel/AddonsCmd 0.1
97 TestFunctional/parallel/PersistentVolumeClaim 25.63
99 TestFunctional/parallel/SSHCmd 0.18
100 TestFunctional/parallel/CpCmd 0.43
102 TestFunctional/parallel/FileSync 0.07
103 TestFunctional/parallel/CertSync 0.43
107 TestFunctional/parallel/NodeLabels 0.08
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.08
111 TestFunctional/parallel/License 0.3
112 TestFunctional/parallel/Version/short 0.04
113 TestFunctional/parallel/Version/components 0.16
114 TestFunctional/parallel/ImageCommands/ImageListShort 0.07
115 TestFunctional/parallel/ImageCommands/ImageListTable 0.07
116 TestFunctional/parallel/ImageCommands/ImageListJson 0.07
117 TestFunctional/parallel/ImageCommands/ImageListYaml 0.08
118 TestFunctional/parallel/ImageCommands/ImageBuild 1.95
119 TestFunctional/parallel/ImageCommands/Setup 1.64
120 TestFunctional/parallel/DockerEnv/bash 0.28
121 TestFunctional/parallel/UpdateContextCmd/no_changes 0.05
122 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.05
123 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.05
124 TestFunctional/parallel/ServiceCmd/DeployApp 12.09
125 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.46
126 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.37
127 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.19
128 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.16
129 TestFunctional/parallel/ImageCommands/ImageRemove 0.15
130 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.29
131 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.2
133 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.22
134 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.02
136 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.11
137 TestFunctional/parallel/ServiceCmd/List 0.12
138 TestFunctional/parallel/ServiceCmd/JSONOutput 0.1
139 TestFunctional/parallel/ServiceCmd/HTTPS 0.1
140 TestFunctional/parallel/ServiceCmd/Format 0.09
141 TestFunctional/parallel/ServiceCmd/URL 0.1
142 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.07
143 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
144 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.02
145 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
146 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
147 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
148 TestFunctional/parallel/ProfileCmd/profile_not_create 0.13
149 TestFunctional/parallel/ProfileCmd/profile_list 0.12
150 TestFunctional/parallel/ProfileCmd/profile_json_output 0.12
151 TestFunctional/parallel/MountCmd/any-port 5.15
152 TestFunctional/parallel/MountCmd/specific-port 0.79
153 TestFunctional/parallel/MountCmd/VerifyCleanup 1.77
154 TestFunctional/delete_echo-server_images 0.06
155 TestFunctional/delete_my-image_image 0.02
156 TestFunctional/delete_minikube_cached_images 0.01
160 TestMultiControlPlane/serial/StartCluster 178.17
161 TestMultiControlPlane/serial/DeployApp 5.09
162 TestMultiControlPlane/serial/PingHostFromPods 0.76
163 TestMultiControlPlane/serial/AddWorkerNode 57.03
164 TestMultiControlPlane/serial/NodeLabels 0.14
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.25
166 TestMultiControlPlane/serial/CopyFile 4.16
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 77.95
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.05
185 TestJSONOutput/start/Audit 0
187 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Audit 0
193 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Audit 0
199 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/stop/Command 2.92
203 TestJSONOutput/stop/Audit 0
205 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
207 TestErrorJSONOutput 0.2
212 TestMainNoArgs 0.03
259 TestStoppedBinaryUpgrade/Setup 1.02
271 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
275 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
276 TestNoKubernetes/serial/ProfileList 31.33
277 TestNoKubernetes/serial/Stop 3.49
279 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
291 TestStoppedBinaryUpgrade/MinikubeLogs 0.76
294 TestStartStop/group/old-k8s-version/serial/Stop 3.26
295 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.12
305 TestStartStop/group/no-preload/serial/Stop 2.99
306 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.11
318 TestStartStop/group/embed-certs/serial/Stop 3.45
319 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.12
323 TestStartStop/group/default-k8s-diff-port/serial/Stop 2.79
324 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.13
336 TestStartStop/group/newest-cni/serial/DeployApp 0
337 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
338 TestStartStop/group/newest-cni/serial/Stop 3.12
339 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.13
341 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
342 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-091000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-091000: exit status 85 (95.072417ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-091000 | jenkins | v1.34.0 | 16 Sep 24 03:19 PDT |          |
	|         | -p download-only-091000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 03:19:44
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 03:19:44.089652    1654 out.go:345] Setting OutFile to fd 1 ...
	I0916 03:19:44.089795    1654 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 03:19:44.089799    1654 out.go:358] Setting ErrFile to fd 2...
	I0916 03:19:44.089801    1654 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 03:19:44.089928    1654 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1133/.minikube/bin
	W0916 03:19:44.090020    1654 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19651-1133/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19651-1133/.minikube/config/config.json: no such file or directory
	I0916 03:19:44.091208    1654 out.go:352] Setting JSON to true
	I0916 03:19:44.108677    1654 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1147,"bootTime":1726480837,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0916 03:19:44.108741    1654 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 03:19:44.115185    1654 out.go:97] [download-only-091000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0916 03:19:44.115324    1654 notify.go:220] Checking for updates...
	W0916 03:19:44.115401    1654 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/preloaded-tarball: no such file or directory
	I0916 03:19:44.119191    1654 out.go:169] MINIKUBE_LOCATION=19651
	I0916 03:19:44.122162    1654 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	I0916 03:19:44.126205    1654 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0916 03:19:44.129152    1654 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 03:19:44.132181    1654 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	W0916 03:19:44.138157    1654 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0916 03:19:44.138312    1654 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 03:19:44.143191    1654 out.go:97] Using the qemu2 driver based on user configuration
	I0916 03:19:44.143215    1654 start.go:297] selected driver: qemu2
	I0916 03:19:44.143232    1654 start.go:901] validating driver "qemu2" against <nil>
	I0916 03:19:44.143309    1654 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 03:19:44.146175    1654 out.go:169] Automatically selected the socket_vmnet network
	I0916 03:19:44.151865    1654 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0916 03:19:44.151967    1654 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0916 03:19:44.152015    1654 cni.go:84] Creating CNI manager for ""
	I0916 03:19:44.152055    1654 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0916 03:19:44.152110    1654 start.go:340] cluster config:
	{Name:download-only-091000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-091000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 03:19:44.157365    1654 iso.go:125] acquiring lock: {Name:mk55e9fb1297ea51932361bbd0234f8c1091a697 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 03:19:44.162131    1654 out.go:97] Downloading VM boot image ...
	I0916 03:19:44.162150    1654 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/iso/arm64/minikube-v1.34.0-1726415472-19646-arm64.iso
	I0916 03:19:49.782507    1654 out.go:97] Starting "download-only-091000" primary control-plane node in "download-only-091000" cluster
	I0916 03:19:49.782534    1654 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0916 03:19:49.833107    1654 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0916 03:19:49.833132    1654 cache.go:56] Caching tarball of preloaded images
	I0916 03:19:49.833274    1654 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0916 03:19:49.837315    1654 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0916 03:19:49.837321    1654 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0916 03:19:49.908722    1654 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0916 03:19:55.562965    1654 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0916 03:19:55.563155    1654 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0916 03:19:56.259093    1654 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0916 03:19:56.259310    1654 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/download-only-091000/config.json ...
	I0916 03:19:56.259327    1654 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/download-only-091000/config.json: {Name:mka9ea026540357746e2a2b0fa7705edce6bdf58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 03:19:56.259554    1654 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0916 03:19:56.259743    1654 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0916 03:19:57.426573    1654 out.go:193] 
	W0916 03:19:57.436655    1654 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19651-1133/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x106b45780 0x106b45780 0x106b45780 0x106b45780 0x106b45780 0x106b45780 0x106b45780] Decompressors:map[bz2:0x1400055dd40 gz:0x1400055dd48 tar:0x1400055dcf0 tar.bz2:0x1400055dd00 tar.gz:0x1400055dd10 tar.xz:0x1400055dd20 tar.zst:0x1400055dd30 tbz2:0x1400055dd00 tgz:0x1400055dd10 txz:0x1400055dd20 tzst:0x1400055dd30 xz:0x1400055dd60 zip:0x1400055dd70 zst:0x1400055dd68] Getters:map[file:0x140002017d0 http:0x14000670280 https:0x14000670320] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 403
	W0916 03:19:57.436682    1654 out_reason.go:110] 
	W0916 03:19:57.443602    1654 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 03:19:57.447593    1654 out.go:193] 
	
	
	* The control-plane node download-only-091000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-091000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)
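Note: although LogsDuration itself passes, the captured log preserves the root cause of the v1.20.0 download failures: fetching the kubectl checksum from dl.k8s.io returns HTTP 403. A quick way to reproduce the check outside the test harness, using the same URL recorded in the log (the likely explanation, though not confirmed by this report, is that no darwin/arm64 kubectl build was published for v1.20.0):

	curl -sI "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256" | head -n 1
	# expect a 4xx status line, matching the "bad response code: 403" above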

TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.11s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-091000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.11s)

TestDownloadOnly/v1.31.1/json-events (7.13s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-172000 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-172000 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=qemu2 : (7.13073s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (7.13s)

TestDownloadOnly/v1.31.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.1/preload-exists
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
--- PASS: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestDownloadOnly/v1.31.1/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-172000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-172000: exit status 85 (76.656292ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-091000 | jenkins | v1.34.0 | 16 Sep 24 03:19 PDT |                     |
	|         | -p download-only-091000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 16 Sep 24 03:19 PDT | 16 Sep 24 03:19 PDT |
	| delete  | -p download-only-091000        | download-only-091000 | jenkins | v1.34.0 | 16 Sep 24 03:19 PDT | 16 Sep 24 03:19 PDT |
	| start   | -o=json --download-only        | download-only-172000 | jenkins | v1.34.0 | 16 Sep 24 03:19 PDT |                     |
	|         | -p download-only-172000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 03:19:57
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 03:19:57.866561    1681 out.go:345] Setting OutFile to fd 1 ...
	I0916 03:19:57.866692    1681 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 03:19:57.866695    1681 out.go:358] Setting ErrFile to fd 2...
	I0916 03:19:57.866697    1681 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 03:19:57.866814    1681 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1133/.minikube/bin
	I0916 03:19:57.867894    1681 out.go:352] Setting JSON to true
	I0916 03:19:57.884091    1681 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1160,"bootTime":1726480837,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0916 03:19:57.884165    1681 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 03:19:57.888404    1681 out.go:97] [download-only-172000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0916 03:19:57.888497    1681 notify.go:220] Checking for updates...
	I0916 03:19:57.893337    1681 out.go:169] MINIKUBE_LOCATION=19651
	I0916 03:19:57.896390    1681 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	I0916 03:19:57.900376    1681 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0916 03:19:57.903331    1681 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 03:19:57.906363    1681 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	W0916 03:19:57.912298    1681 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0916 03:19:57.912449    1681 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 03:19:57.915356    1681 out.go:97] Using the qemu2 driver based on user configuration
	I0916 03:19:57.915365    1681 start.go:297] selected driver: qemu2
	I0916 03:19:57.915375    1681 start.go:901] validating driver "qemu2" against <nil>
	I0916 03:19:57.915425    1681 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 03:19:57.918224    1681 out.go:169] Automatically selected the socket_vmnet network
	I0916 03:19:57.923475    1681 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0916 03:19:57.923627    1681 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0916 03:19:57.923647    1681 cni.go:84] Creating CNI manager for ""
	I0916 03:19:57.923670    1681 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 03:19:57.923675    1681 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0916 03:19:57.923719    1681 start.go:340] cluster config:
	{Name:download-only-172000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-172000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 03:19:57.927218    1681 iso.go:125] acquiring lock: {Name:mk55e9fb1297ea51932361bbd0234f8c1091a697 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 03:19:57.930354    1681 out.go:97] Starting "download-only-172000" primary control-plane node in "download-only-172000" cluster
	I0916 03:19:57.930361    1681 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 03:19:57.986846    1681 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0916 03:19:57.986864    1681 cache.go:56] Caching tarball of preloaded images
	I0916 03:19:57.987024    1681 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 03:19:57.992966    1681 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0916 03:19:57.992974    1681 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 ...
	I0916 03:19:58.079055    1681 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4?checksum=md5:402f69b5e09ccb1e1dbe401b4cdd104d -> /Users/jenkins/minikube-integration/19651-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-172000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-172000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.08s)
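
Note: the preload fetch logged at download.go:107 above can be reproduced by hand; a minimal sketch (URL and md5 taken from that log line, md5(1) being the stock macOS digest tool):

curl -fLo preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 \
  "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4"
# The downloader verifies the md5 from the ?checksum= query parameter; the same check by hand:
md5 preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4   # expect 402f69b5e09ccb1e1dbe401b4cdd104d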

TestDownloadOnly/v1.31.1/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.11s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-172000
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.10s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-490000
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-490000: exit status 85 (61.077709ms)

-- stdout --
	* Profile "addons-490000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-490000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-490000
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-490000: exit status 85 (58.370041ms)

-- stdout --
	* Profile "addons-490000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-490000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (201.89s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-490000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-darwin-arm64 start -p addons-490000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns: (3m21.889081708s)
--- PASS: TestAddons/Setup (201.89s)

TestAddons/serial/Volcano (38.28s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:897: volcano-scheduler stabilized in 7.802625ms
addons_test.go:905: volcano-admission stabilized in 7.840167ms
addons_test.go:913: volcano-controller stabilized in 7.860584ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-7xcws" [ee0ef547-546a-4d83-8653-e093dfb5e8c3] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.010677625s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-wzzcc" [adbcd5a0-9211-42ee-b208-ca6f0e29b781] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.005639209s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-p2hwv" [4ac1bc3c-4870-4cd3-9784-8f9fece26c71] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.006178958s
addons_test.go:932: (dbg) Run:  kubectl --context addons-490000 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-490000 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-490000 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [82d247f2-d23a-42ec-9467-b2f6e6dc520b] Pending
helpers_test.go:344: "test-job-nginx-0" [82d247f2-d23a-42ec-9467-b2f6e6dc520b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [82d247f2-d23a-42ec-9467-b2f6e6dc520b] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.010825834s
addons_test.go:968: (dbg) Run:  out/minikube-darwin-arm64 -p addons-490000 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-darwin-arm64 -p addons-490000 addons disable volcano --alsologtostderr -v=1: (10.011496291s)
--- PASS: TestAddons/serial/Volcano (38.28s)
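
Note: the job submitted from testdata/vcjob.yaml is not reproduced in this log; a minimal Volcano Job of the same shape is sketched below. Only test-job, my-volcano, and the nginx container name come from the log; every other field is an assumption.

kubectl --context addons-490000 apply -f - <<'EOF'
apiVersion: batch.volcano.sh/v1alpha1
kind: Job
metadata:
  name: test-job            # matches the volcano.sh/job-name=test-job label waited on above
  namespace: my-volcano
spec:
  schedulerName: volcano
  minAvailable: 1
  tasks:
    - name: nginx
      replicas: 1
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: nginx   # yields pod "test-job-nginx-0" as seen above
              image: nginx
EOF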

TestAddons/serial/GCPAuth/Namespaces (0.1s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-490000 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-490000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.10s)

TestAddons/parallel/Ingress (17.52s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-490000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-490000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-490000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [3fb1c098-8d5f-4ab3-994a-1d895fc18d80] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [3fb1c098-8d5f-4ab3-994a-1d895fc18d80] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.010247334s
addons_test.go:264: (dbg) Run:  out/minikube-darwin-arm64 -p addons-490000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-490000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-darwin-arm64 -p addons-490000 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.105.2
addons_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p addons-490000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:313: (dbg) Run:  out/minikube-darwin-arm64 -p addons-490000 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-darwin-arm64 -p addons-490000 addons disable ingress --alsologtostderr -v=1: (7.296035542s)
--- PASS: TestAddons/parallel/Ingress (17.52s)
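
Note: the curl at addons_test.go:264 only resolves because nginx-ingress-v1.yaml maps Host: nginx.example.com onto the nginx service; an illustrative equivalent (the actual testdata may differ in details):

kubectl --context addons-490000 apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  rules:
    - host: nginx.example.com     # the Host header sent by the curl above
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx       # the pod/service created from nginx-pod-svc.yaml
                port:
                  number: 80
EOF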

TestAddons/parallel/InspektorGadget (10.29s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-ccmmq" [6aa0b57e-1dcb-4aa1-8383-60ea27c6bf9f] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.010996875s
addons_test.go:851: (dbg) Run:  out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-490000
addons_test.go:851: (dbg) Done: out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-490000: (5.276759083s)
--- PASS: TestAddons/parallel/InspektorGadget (10.29s)

TestAddons/parallel/MetricsServer (5.28s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 1.35175ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-49wsl" [a33f499e-d3e9-4aa6-a561-f8d7a17f8390] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.006990959s
addons_test.go:417: (dbg) Run:  kubectl --context addons-490000 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-darwin-arm64 -p addons-490000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.28s)

TestAddons/parallel/CSI (39.46s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 2.538833ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-490000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-490000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-490000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-490000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-490000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-490000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-490000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-490000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-490000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-490000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-490000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-490000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-490000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [758ee3c6-eaf3-4cf7-b9a5-c373f3a9efb7] Pending
helpers_test.go:344: "task-pv-pod" [758ee3c6-eaf3-4cf7-b9a5-c373f3a9efb7] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [758ee3c6-eaf3-4cf7-b9a5-c373f3a9efb7] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.0082875s
addons_test.go:590: (dbg) Run:  kubectl --context addons-490000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-490000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-490000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-490000 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-490000 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-490000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-490000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-490000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-490000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-490000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-490000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-490000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-490000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [5be99274-770c-48ea-9627-2fef8a759dfd] Pending
helpers_test.go:344: "task-pv-pod-restore" [5be99274-770c-48ea-9627-2fef8a759dfd] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [5be99274-770c-48ea-9627-2fef8a759dfd] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.005285208s
addons_test.go:632: (dbg) Run:  kubectl --context addons-490000 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Done: kubectl --context addons-490000 delete pod task-pv-pod-restore: (1.03237875s)
addons_test.go:636: (dbg) Run:  kubectl --context addons-490000 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-490000 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-darwin-arm64 -p addons-490000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-darwin-arm64 -p addons-490000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.086596667s)
addons_test.go:648: (dbg) Run:  out/minikube-darwin-arm64 -p addons-490000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (39.46s)
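
Note: the sequence above is create PVC, attach pod, snapshot, delete, restore. The two key objects, reconstructed from the names in the log (the storage class and snapshot class names are assumptions about what the csi-hostpath-driver addon registers):

kubectl --context addons-490000 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hpvc                                  # the claim polled for above
spec:
  storageClassName: csi-hostpath-sc           # assumed class name
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 1Gi                            # assumed size
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: new-snapshot-demo                     # the snapshot polled for readyToUse above
spec:
  volumeSnapshotClassName: csi-hostpath-snapclass   # assumed class name
  source:
    persistentVolumeClaimName: hpvc
EOF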

TestAddons/parallel/Headlamp (15.64s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-darwin-arm64 addons enable headlamp -p addons-490000 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-9pktg" [a853eaf2-3d58-4675-a1ca-a65533a38382] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-9pktg" [a853eaf2-3d58-4675-a1ca-a65533a38382] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.006382459s
addons_test.go:839: (dbg) Run:  out/minikube-darwin-arm64 -p addons-490000 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-darwin-arm64 -p addons-490000 addons disable headlamp --alsologtostderr -v=1: (5.266311916s)
--- PASS: TestAddons/parallel/Headlamp (15.64s)

TestAddons/parallel/CloudSpanner (6.21s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-ltstq" [9a95e01c-9978-4cb6-aa93-8fa1b2b246f7] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.005482583s
addons_test.go:870: (dbg) Run:  out/minikube-darwin-arm64 addons disable cloud-spanner -p addons-490000
--- PASS: TestAddons/parallel/CloudSpanner (6.21s)

TestAddons/parallel/LocalPath (11.61s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-490000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-490000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-490000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-490000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-490000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-490000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-490000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-490000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [0890de86-1bd1-4b93-975b-50a711d4462d] Pending
helpers_test.go:344: "test-local-path" [0890de86-1bd1-4b93-975b-50a711d4462d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [0890de86-1bd1-4b93-975b-50a711d4462d] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [0890de86-1bd1-4b93-975b-50a711d4462d] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.011326916s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-490000 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-darwin-arm64 -p addons-490000 ssh "cat /opt/local-path-provisioner/pvc-160fc8de-ae25-47a5-bb4d-5584ceea0a29_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-490000 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-490000 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-darwin-arm64 -p addons-490000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (11.61s)
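
Note: the ssh cat above shows the claim materialized under /opt/local-path-provisioner inside the VM. A PVC of the same shape (the class name local-path is an assumption, being the provisioner's usual default; the size is illustrative):

kubectl --context addons-490000 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc                 # the claim polled for above
spec:
  storageClassName: local-path   # assumed default class of local-path-provisioner
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 64Mi              # illustrative size
EOF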

TestAddons/parallel/NvidiaDevicePlugin (6.22s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-xr4jg" [4f23bf82-a6dd-44ab-af49-c86decb6acad] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.010324708s
addons_test.go:1064: (dbg) Run:  out/minikube-darwin-arm64 addons disable nvidia-device-plugin -p addons-490000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.22s)

TestAddons/parallel/Yakd (10.33s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-2qlmm" [9014ef34-eebd-42a6-b32f-510dfce84636] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004778417s
addons_test.go:1076: (dbg) Run:  out/minikube-darwin-arm64 -p addons-490000 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-darwin-arm64 -p addons-490000 addons disable yakd --alsologtostderr -v=1: (5.323632375s)
--- PASS: TestAddons/parallel/Yakd (10.33s)

TestAddons/StoppedEnableDisable (12.4s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-darwin-arm64 stop -p addons-490000
addons_test.go:174: (dbg) Done: out/minikube-darwin-arm64 stop -p addons-490000: (12.204297666s)
addons_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-490000
addons_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-490000
addons_test.go:187: (dbg) Run:  out/minikube-darwin-arm64 addons disable gvisor -p addons-490000
--- PASS: TestAddons/StoppedEnableDisable (12.40s)

TestHyperKitDriverInstallOrUpdate (10.84s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (10.84s)

TestErrorSpam/setup (35.24s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-235000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-235000 --driver=qemu2 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -p nospam-235000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-235000 --driver=qemu2 : (35.243971792s)
error_spam_test.go:91: acceptable stderr: "! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.31.1."
--- PASS: TestErrorSpam/setup (35.24s)

TestErrorSpam/start (0.34s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-235000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-235000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-235000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-235000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-235000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-235000 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

TestErrorSpam/status (0.24s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-235000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-235000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-235000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-235000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-235000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-235000 status
--- PASS: TestErrorSpam/status (0.24s)

TestErrorSpam/pause (0.68s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-235000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-235000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-235000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-235000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-235000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-235000 pause
--- PASS: TestErrorSpam/pause (0.68s)

TestErrorSpam/unpause (0.61s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-235000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-235000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-235000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-235000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-235000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-235000 unpause
--- PASS: TestErrorSpam/unpause (0.61s)

TestErrorSpam/stop (55.27s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-235000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-235000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-235000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-235000 stop: (3.192669333s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-235000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-235000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-235000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-235000 stop: (26.036544375s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-235000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-235000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-235000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-235000 stop: (26.033525041s)
--- PASS: TestErrorSpam/stop (55.27s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /Users/jenkins/minikube-integration/19651-1133/.minikube/files/etc/test/nested/copy/1652/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (48.73s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-926000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2234: (dbg) Done: out/minikube-darwin-arm64 start -p functional-926000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (48.724750041s)
--- PASS: TestFunctional/serial/StartWithProxy (48.73s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (36.73s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-926000 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-darwin-arm64 start -p functional-926000 --alsologtostderr -v=8: (36.7339505s)
functional_test.go:663: soft start took 36.73436875s for "functional-926000" cluster.
--- PASS: TestFunctional/serial/SoftStart (36.73s)

TestFunctional/serial/KubeContext (0.03s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (0.05s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-926000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.05s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.61s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-926000 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-926000 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-926000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.61s)

TestFunctional/serial/CacheCmd/cache/add_local (1.56s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-926000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialCacheCmdcacheadd_local326560771/001
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-arm64 -p functional-926000 cache add minikube-local-cache-test:functional-926000
functional_test.go:1089: (dbg) Done: out/minikube-darwin-arm64 -p functional-926000 cache add minikube-local-cache-test:functional-926000: (1.244001166s)
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-arm64 -p functional-926000 cache delete minikube-local-cache-test:functional-926000
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-926000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.56s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.03s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-darwin-arm64 -p functional-926000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.62s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-darwin-arm64 -p functional-926000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-arm64 -p functional-926000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-926000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (70.325125ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-arm64 -p functional-926000 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-darwin-arm64 -p functional-926000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (0.62s)

TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/serial/MinikubeKubectlCmd (1.93s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-darwin-arm64 -p functional-926000 kubectl -- --context functional-926000 get pods
functional_test.go:716: (dbg) Done: out/minikube-darwin-arm64 -p functional-926000 kubectl -- --context functional-926000 get pods: (1.930500459s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (1.93s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.02s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-926000 get pods
functional_test.go:741: (dbg) Done: out/kubectl --context functional-926000 get pods: (1.02434775s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.02s)

TestFunctional/serial/ExtraConfig (38.81s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-926000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-darwin-arm64 start -p functional-926000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (38.809723584s)
functional_test.go:761: restart took 38.809827459s for "functional-926000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (38.81s)
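
Note: one way to confirm the --extra-config flag reached the API server (a sketch; the static-pod name assumes kubeadm's usual <component>-<node> convention):

kubectl --context functional-926000 -n kube-system get pod kube-apiserver-functional-926000 -o yaml \
  | grep enable-admission-plugins    # should list NamespaceAutoProvision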

TestFunctional/serial/ComponentHealth (0.05s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-926000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.05s)

TestFunctional/serial/LogsCmd (0.66s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-darwin-arm64 -p functional-926000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.66s)

TestFunctional/serial/LogsFileCmd (0.61s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-darwin-arm64 -p functional-926000 logs --file /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialLogsFileCmd35557105/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.61s)

TestFunctional/serial/InvalidService (4.01s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-926000 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-darwin-arm64 service invalid-svc -p functional-926000
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-darwin-arm64 service invalid-svc -p functional-926000: exit status 115 (126.993875ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.105.4:31248 |
	|-----------|-------------|-------------|----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-926000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.01s)
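
Note: SVC_UNREACHABLE above fires because testdata/invalidsvc.yaml defines a service whose selector matches no running pod. Illustratively (not the repo's actual testdata; the NodePort type is inferred from the URL table above):

kubectl --context functional-926000 apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: invalid-svc
spec:
  type: NodePort
  selector:
    app: no-such-pod   # matches nothing, hence "no running pod for service invalid-svc found"
  ports:
    - port: 80
EOF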

TestFunctional/parallel/ConfigCmd (0.23s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-926000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-926000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-926000 config get cpus: exit status 14 (29.165083ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-926000 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-926000 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-926000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-926000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-926000 config get cpus: exit status 14 (30.303417ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.23s)
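
Note: the two exit-status-14 results above are the expected "key not found" path. The round trip the test drives, as plain commands:

out/minikube-darwin-arm64 -p functional-926000 config get cpus     # exit 14: key not set
out/minikube-darwin-arm64 -p functional-926000 config set cpus 2
out/minikube-darwin-arm64 -p functional-926000 config get cpus     # prints 2
out/minikube-darwin-arm64 -p functional-926000 config unset cpus
out/minikube-darwin-arm64 -p functional-926000 config get cpus     # exit 14 again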

TestFunctional/parallel/DashboardCmd (9.9s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-926000 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-926000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2889: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.90s)

TestFunctional/parallel/DryRun (0.25s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-926000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-926000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (123.905ms)

-- stdout --
	* [functional-926000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19651
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0916 03:38:41.403381    2856 out.go:345] Setting OutFile to fd 1 ...
	I0916 03:38:41.403535    2856 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 03:38:41.403538    2856 out.go:358] Setting ErrFile to fd 2...
	I0916 03:38:41.403540    2856 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 03:38:41.403670    2856 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1133/.minikube/bin
	I0916 03:38:41.404806    2856 out.go:352] Setting JSON to false
	I0916 03:38:41.422802    2856 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2284,"bootTime":1726480837,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0916 03:38:41.422882    2856 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 03:38:41.427811    2856 out.go:177] * [functional-926000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0916 03:38:41.434705    2856 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 03:38:41.434813    2856 notify.go:220] Checking for updates...
	I0916 03:38:41.441670    2856 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	I0916 03:38:41.450628    2856 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0916 03:38:41.454768    2856 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 03:38:41.457797    2856 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	I0916 03:38:41.460715    2856 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 03:38:41.464009    2856 config.go:182] Loaded profile config "functional-926000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 03:38:41.464267    2856 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 03:38:41.468740    2856 out.go:177] * Using the qemu2 driver based on existing profile
	I0916 03:38:41.475676    2856 start.go:297] selected driver: qemu2
	I0916 03:38:41.475686    2856 start.go:901] validating driver "qemu2" against &{Name:functional-926000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-926000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 03:38:41.475734    2856 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 03:38:41.482530    2856 out.go:201] 
	W0916 03:38:41.486626    2856 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0916 03:38:41.490716    2856 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-926000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.25s)

TestFunctional/parallel/InternationalLanguage (0.12s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-926000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-926000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (115.080792ms)

-- stdout --
	* [functional-926000] minikube v1.34.0 sur Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19651
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0916 03:38:41.652900    2872 out.go:345] Setting OutFile to fd 1 ...
	I0916 03:38:41.653012    2872 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 03:38:41.653016    2872 out.go:358] Setting ErrFile to fd 2...
	I0916 03:38:41.653018    2872 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 03:38:41.653142    2872 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1133/.minikube/bin
	I0916 03:38:41.654408    2872 out.go:352] Setting JSON to false
	I0916 03:38:41.671904    2872 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2284,"bootTime":1726480837,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0916 03:38:41.672008    2872 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 03:38:41.676605    2872 out.go:177] * [functional-926000] minikube v1.34.0 sur Darwin 14.5 (arm64)
	I0916 03:38:41.683770    2872 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 03:38:41.683861    2872 notify.go:220] Checking for updates...
	I0916 03:38:41.690623    2872 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	I0916 03:38:41.693705    2872 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0916 03:38:41.696724    2872 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 03:38:41.699676    2872 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	I0916 03:38:41.702640    2872 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 03:38:41.709975    2872 config.go:182] Loaded profile config "functional-926000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 03:38:41.710235    2872 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 03:38:41.714714    2872 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0916 03:38:41.721759    2872 start.go:297] selected driver: qemu2
	I0916 03:38:41.721768    2872 start.go:901] validating driver "qemu2" against &{Name:functional-926000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-926000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 03:38:41.721822    2872 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 03:38:41.728690    2872 out.go:201] 
	W0916 03:38:41.732753    2872 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0916 03:38:41.735730    2872 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.12s)

TestFunctional/parallel/StatusCmd (0.25s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-darwin-arm64 -p functional-926000 status
functional_test.go:860: (dbg) Run:  out/minikube-darwin-arm64 -p functional-926000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-darwin-arm64 -p functional-926000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.25s)

TestFunctional/parallel/AddonsCmd (0.1s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-darwin-arm64 -p functional-926000 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-darwin-arm64 -p functional-926000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.10s)

TestFunctional/parallel/PersistentVolumeClaim (25.63s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [d8164787-06a1-448f-bcc4-73d3ea30129f] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.009504459s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-926000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-926000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-926000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-926000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [4b07630c-4bec-4a6f-ade7-819c24f5d7bd] Pending
helpers_test.go:344: "sp-pod" [4b07630c-4bec-4a6f-ade7-819c24f5d7bd] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [4b07630c-4bec-4a6f-ade7-819c24f5d7bd] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.010658958s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-926000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-926000 delete -f testdata/storage-provisioner/pod.yaml
E0916 03:38:27.617920    1652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/addons-490000/client.crt: no such file or directory" logger="UnhandledError"
E0916 03:38:27.626573    1652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/addons-490000/client.crt: no such file or directory" logger="UnhandledError"
E0916 03:38:27.639925    1652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/addons-490000/client.crt: no such file or directory" logger="UnhandledError"
E0916 03:38:27.663343    1652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/addons-490000/client.crt: no such file or directory" logger="UnhandledError"
E0916 03:38:27.706871    1652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/addons-490000/client.crt: no such file or directory" logger="UnhandledError"
E0916 03:38:27.790339    1652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/addons-490000/client.crt: no such file or directory" logger="UnhandledError"
E0916 03:38:27.953898    1652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/addons-490000/client.crt: no such file or directory" logger="UnhandledError"
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-926000 delete -f testdata/storage-provisioner/pod.yaml: (1.096920834s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-926000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [cb649964-0d08-4673-867f-69538a034794] Pending
E0916 03:38:28.275953    1652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/addons-490000/client.crt: no such file or directory" logger="UnhandledError"
E0916 03:38:28.919711    1652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/addons-490000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "sp-pod" [cb649964-0d08-4673-867f-69538a034794] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E0916 03:38:30.203215    1652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/addons-490000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "sp-pod" [cb649964-0d08-4673-867f-69538a034794] Running
E0916 03:38:32.766704    1652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/addons-490000/client.crt: no such file or directory" logger="UnhandledError"
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.007997208s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-926000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.63s)

TestFunctional/parallel/SSHCmd (0.18s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-darwin-arm64 -p functional-926000 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-darwin-arm64 -p functional-926000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.18s)

TestFunctional/parallel/CpCmd (0.43s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-926000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-926000 ssh -n functional-926000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-926000 cp functional-926000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd3531651306/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-926000 ssh -n functional-926000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-926000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-926000 ssh -n functional-926000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.43s)

TestFunctional/parallel/FileSync (0.07s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/1652/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-darwin-arm64 -p functional-926000 ssh "sudo cat /etc/test/nested/copy/1652/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.07s)

TestFunctional/parallel/CertSync (0.43s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/1652.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-926000 ssh "sudo cat /etc/ssl/certs/1652.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/1652.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-926000 ssh "sudo cat /usr/share/ca-certificates/1652.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-926000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/16522.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-926000 ssh "sudo cat /etc/ssl/certs/16522.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/16522.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-926000 ssh "sudo cat /usr/share/ca-certificates/16522.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-926000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.43s)

TestFunctional/parallel/NodeLabels (0.08s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-926000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.08s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-darwin-arm64 -p functional-926000 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-926000 ssh "sudo systemctl is-active crio": exit status 1 (82.600041ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.08s)

TestFunctional/parallel/License (0.3s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.30s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-darwin-arm64 -p functional-926000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.16s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-darwin-arm64 -p functional-926000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.16s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-926000 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-926000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-926000
docker.io/kicbase/echo-server:functional-926000
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-926000 image ls --format short --alsologtostderr:
I0916 03:38:44.031439    2914 out.go:345] Setting OutFile to fd 1 ...
I0916 03:38:44.031763    2914 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 03:38:44.031767    2914 out.go:358] Setting ErrFile to fd 2...
I0916 03:38:44.031770    2914 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 03:38:44.031897    2914 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1133/.minikube/bin
I0916 03:38:44.032286    2914 config.go:182] Loaded profile config "functional-926000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0916 03:38:44.032348    2914 config.go:182] Loaded profile config "functional-926000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0916 03:38:44.033145    2914 ssh_runner.go:195] Run: systemctl --version
I0916 03:38:44.033153    2914 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/functional-926000/id_rsa Username:docker}
I0916 03:38:44.063701    2914 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.07s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-926000 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-926000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/coredns/coredns             | v1.11.3           | 2f6c962e7b831 | 60.2MB |
| registry.k8s.io/kube-proxy                  | v1.31.1           | 24a140c548c07 | 94.7MB |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| localhost/my-image                          | functional-926000 | 10060c4709cef | 1.41MB |
| docker.io/library/minikube-local-cache-test | functional-926000 | af111e839c765 | 30B    |
| docker.io/library/nginx                     | alpine            | b887aca7aed61 | 47MB   |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| registry.k8s.io/pause                       | 3.10              | afb61768ce381 | 514kB  |
| docker.io/kicbase/echo-server               | functional-926000 | ce2d2cda2d858 | 4.78MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| registry.k8s.io/kube-apiserver              | v1.31.1           | d3f53a98c0a9d | 91.6MB |
| registry.k8s.io/kube-controller-manager     | v1.31.1           | 279f381cb3736 | 85.9MB |
| registry.k8s.io/kube-scheduler              | v1.31.1           | 7f8aa378bb47d | 66MB   |
| docker.io/library/nginx                     | latest            | 195245f0c7927 | 193MB  |
| registry.k8s.io/etcd                        | 3.5.15-0          | 27e3830e14027 | 139MB  |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-926000 image ls --format table --alsologtostderr:
I0916 03:38:46.208588    2929 out.go:345] Setting OutFile to fd 1 ...
I0916 03:38:46.208727    2929 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 03:38:46.208731    2929 out.go:358] Setting ErrFile to fd 2...
I0916 03:38:46.208734    2929 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 03:38:46.208845    2929 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1133/.minikube/bin
I0916 03:38:46.209318    2929 config.go:182] Loaded profile config "functional-926000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0916 03:38:46.209379    2929 config.go:182] Loaded profile config "functional-926000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0916 03:38:46.210187    2929 ssh_runner.go:195] Run: systemctl --version
I0916 03:38:46.210196    2929 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/functional-926000/id_rsa Username:docker}
I0916 03:38:46.237399    2929 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
E0916 03:38:48.134071    1652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/addons-490000/client.crt: no such file or directory" logger="UnhandledError"
2024/09/16 03:38:51 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.07s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-926000 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-926000 image ls --format json --alsologtostderr:
[{"id":"27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"139000000"},{"id":"af111e839c765c077215bebc645a089dc910423aa065a87c14c02d1b39fa20fd","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-926000"],"size":"30"},{"id":"d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"91600000"},{"id":"7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"66000000"},{"id":"24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"94700000"},{"id":"b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"47000000"},{"id":"afb61768ce381961ca0beff95337601f2
9dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"514000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"10060c4709cef7e68f868e8abecb1dd4369758b0b0cb5c36f0ae001803250018","repoDigests":[],"repoTags":["localhost/my-image:functional-926000"],"size":"1410000"},{"id":"279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"85900000"},{"id":"195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c","repoDigests":[],"repoTags":["docker.io/libr
ary/nginx:latest"],"size":"193000000"},{"id":"2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"60200000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-926000"],"size":"4780000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-926000 image ls --format json --alsologtostderr:
I0916 03:38:46.137155    2927 out.go:345] Setting OutFile to fd 1 ...
I0916 03:38:46.137292    2927 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 03:38:46.137295    2927 out.go:358] Setting ErrFile to fd 2...
I0916 03:38:46.137298    2927 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 03:38:46.137422    2927 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1133/.minikube/bin
I0916 03:38:46.137865    2927 config.go:182] Loaded profile config "functional-926000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0916 03:38:46.137932    2927 config.go:182] Loaded profile config "functional-926000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0916 03:38:46.138784    2927 ssh_runner.go:195] Run: systemctl --version
I0916 03:38:46.138793    2927 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/functional-926000/id_rsa Username:docker}
I0916 03:38:46.166672    2927 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.07s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-926000 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-926000 image ls --format yaml --alsologtostderr:
- id: 24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "94700000"
- id: 195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "193000000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "91600000"
- id: 7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "66000000"
- id: b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "47000000"
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139000000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "85900000"
- id: 2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "60200000"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "514000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-926000
size: "4780000"
- id: af111e839c765c077215bebc645a089dc910423aa065a87c14c02d1b39fa20fd
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-926000
size: "30"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"

functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-926000 image ls --format yaml --alsologtostderr:
I0916 03:38:44.106476    2916 out.go:345] Setting OutFile to fd 1 ...
I0916 03:38:44.106636    2916 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 03:38:44.106640    2916 out.go:358] Setting ErrFile to fd 2...
I0916 03:38:44.106643    2916 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 03:38:44.106786    2916 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1133/.minikube/bin
I0916 03:38:44.107192    2916 config.go:182] Loaded profile config "functional-926000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0916 03:38:44.107252    2916 config.go:182] Loaded profile config "functional-926000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0916 03:38:44.108170    2916 ssh_runner.go:195] Run: systemctl --version
I0916 03:38:44.108178    2916 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/functional-926000/id_rsa Username:docker}
I0916 03:38:44.143103    2916 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.08s)

TestFunctional/parallel/ImageCommands/ImageBuild (1.95s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p functional-926000 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-926000 ssh pgrep buildkitd: exit status 1 (67.582542ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-darwin-arm64 -p functional-926000 image build -t localhost/my-image:functional-926000 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-darwin-arm64 -p functional-926000 image build -t localhost/my-image:functional-926000 testdata/build --alsologtostderr: (1.804059792s)
functional_test.go:323: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-926000 image build -t localhost/my-image:functional-926000 testdata/build --alsologtostderr:
I0916 03:38:44.258762    2920 out.go:345] Setting OutFile to fd 1 ...
I0916 03:38:44.258992    2920 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 03:38:44.258996    2920 out.go:358] Setting ErrFile to fd 2...
I0916 03:38:44.258998    2920 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 03:38:44.259132    2920 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1133/.minikube/bin
I0916 03:38:44.259564    2920 config.go:182] Loaded profile config "functional-926000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0916 03:38:44.260305    2920 config.go:182] Loaded profile config "functional-926000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0916 03:38:44.261133    2920 ssh_runner.go:195] Run: systemctl --version
I0916 03:38:44.261146    2920 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1133/.minikube/machines/functional-926000/id_rsa Username:docker}
I0916 03:38:44.301328    2920 build_images.go:161] Building image from path: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.2311951576.tar
I0916 03:38:44.301404    2920 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0916 03:38:44.317326    2920 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2311951576.tar
I0916 03:38:44.319379    2920 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2311951576.tar: stat -c "%s %y" /var/lib/minikube/build/build.2311951576.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2311951576.tar': No such file or directory
I0916 03:38:44.319399    2920 ssh_runner.go:362] scp /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.2311951576.tar --> /var/lib/minikube/build/build.2311951576.tar (3072 bytes)
I0916 03:38:44.334611    2920 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2311951576
I0916 03:38:44.338798    2920 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2311951576 -xf /var/lib/minikube/build/build.2311951576.tar
I0916 03:38:44.343731    2920 docker.go:360] Building image: /var/lib/minikube/build/build.2311951576
I0916 03:38:44.343799    2920 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-926000 /var/lib/minikube/build/build.2311951576
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.9s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 0.4s

#6 [2/3] RUN true
#6 DONE 0.2s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:10060c4709cef7e68f868e8abecb1dd4369758b0b0cb5c36f0ae001803250018 done
#8 naming to localhost/my-image:functional-926000 done
#8 DONE 0.0s
I0916 03:38:46.016820    2920 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-926000 /var/lib/minikube/build/build.2311951576: (1.673059542s)
I0916 03:38:46.016889    2920 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2311951576
I0916 03:38:46.020793    2920 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2311951576.tar
I0916 03:38:46.024257    2920 build_images.go:217] Built localhost/my-image:functional-926000 from /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.2311951576.tar
I0916 03:38:46.024275    2920 build_images.go:133] succeeded building to: functional-926000
I0916 03:38:46.024278    2920 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-926000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (1.95s)

TestFunctional/parallel/ImageCommands/Setup (1.64s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.627818666s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-926000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.64s)

TestFunctional/parallel/DockerEnv/bash (0.28s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-926000 docker-env) && out/minikube-darwin-arm64 status -p functional-926000"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-926000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.28s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-926000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-926000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-926000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

TestFunctional/parallel/ServiceCmd/DeployApp (12.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-926000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-926000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-5t4tq" [ab892f22-2000-4667-9107-70fc4e704051] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-5t4tq" [ab892f22-2000-4667-9107-70fc4e704051] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 12.009665791s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (12.09s)
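The deploy step above is plain kubectl against the functional cluster; a hedged recreation outside the harness (image tag as logged, arm64-only):

    kubectl --context functional-926000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
    kubectl --context functional-926000 expose deployment hello-node --type=NodePort --port=8080
    # watch until the pod backing app=hello-node reports Running
    kubectl --context functional-926000 get pods -l app=hello-node -w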
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.46s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-darwin-arm64 -p functional-926000 image load --daemon kicbase/echo-server:functional-926000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-926000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.46s)
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.37s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-darwin-arm64 -p functional-926000 image load --daemon kicbase/echo-server:functional-926000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-926000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.37s)
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.19s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-926000
functional_test.go:245: (dbg) Run:  out/minikube-darwin-arm64 -p functional-926000 image load --daemon kicbase/echo-server:functional-926000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-926000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.19s)
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.16s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-darwin-arm64 -p functional-926000 image save kicbase/echo-server:functional-926000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.16s)
TestFunctional/parallel/ImageCommands/ImageRemove (0.15s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 -p functional-926000 image rm kicbase/echo-server:functional-926000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-926000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.15s)
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.29s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-darwin-arm64 -p functional-926000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-926000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.29s)
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.2s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-926000
functional_test.go:424: (dbg) Run:  out/minikube-darwin-arm64 -p functional-926000 image save --daemon kicbase/echo-server:functional-926000 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-926000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.20s)
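Taken together, the ImageCommands subtests above round-trip an image between the host Docker daemon, a tarball, and the cluster runtime. A condensed sketch of that cycle, using the same kicbase/echo-server tag the harness uses (the tarball path is illustrative):

    # host daemon -> cluster
    out/minikube-darwin-arm64 -p functional-926000 image load --daemon kicbase/echo-server:functional-926000
    # cluster -> tarball, then remove and restore from the tarball
    out/minikube-darwin-arm64 -p functional-926000 image save kicbase/echo-server:functional-926000 /tmp/echo-server-save.tar
    out/minikube-darwin-arm64 -p functional-926000 image rm kicbase/echo-server:functional-926000
    out/minikube-darwin-arm64 -p functional-926000 image load /tmp/echo-server-save.tar
    # cluster -> host daemon, then confirm it arrived
    out/minikube-darwin-arm64 -p functional-926000 image save --daemon kicbase/echo-server:functional-926000
    docker image inspect kicbase/echo-server:functional-926000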
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.22s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-926000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-926000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-926000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2729: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-926000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.22s)
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-926000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-926000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [e4d7e42c-a649-4b13-a038-f259643c87c5] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [e4d7e42c-a649-4b13-a038-f259643c87c5] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.009968459s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.11s)
TestFunctional/parallel/ServiceCmd/List (0.12s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-darwin-arm64 -p functional-926000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.12s)
TestFunctional/parallel/ServiceCmd/JSONOutput (0.1s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-darwin-arm64 -p functional-926000 service list -o json
functional_test.go:1494: Took "98.963916ms" to run "out/minikube-darwin-arm64 -p functional-926000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.10s)
TestFunctional/parallel/ServiceCmd/HTTPS (0.1s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-darwin-arm64 -p functional-926000 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.105.4:31612
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.10s)
TestFunctional/parallel/ServiceCmd/Format (0.09s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-darwin-arm64 -p functional-926000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.09s)
TestFunctional/parallel/ServiceCmd/URL (0.1s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-darwin-arm64 -p functional-926000 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.105.4:31612
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.10s)
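These ServiceCmd subtests are all views over the same NodePort service; a sketch of the equivalent manual queries, exactly as logged:

    out/minikube-darwin-arm64 -p functional-926000 service list -o json
    out/minikube-darwin-arm64 -p functional-926000 service --namespace=default --https --url hello-node
    out/minikube-darwin-arm64 -p functional-926000 service hello-node --url --format={{.IP}}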
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-926000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.107.215.238 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-926000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)
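The tunnel serial above is the full lifecycle: start a tunnel, wait for a LoadBalancer service, reach it by ingress IP and by cluster DNS name, then tear the tunnel down. A hedged sketch of doing the same by hand in an interactive shell (the tunnel runs until killed and needs privileges to install routes; 10.96.0.10 is the kube-dns service IP from this run):

    out/minikube-darwin-arm64 -p functional-926000 tunnel &
    kubectl --context functional-926000 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
    # query the in-cluster DNS service directly, as the DNSResolutionByDig step does
    dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
    kill %1    # stop the background tunnel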
TestFunctional/parallel/ProfileCmd/profile_not_create (0.13s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.13s)
TestFunctional/parallel/ProfileCmd/profile_list (0.12s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1315: Took "87.881334ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1329: Took "34.697625ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.12s)
TestFunctional/parallel/ProfileCmd/profile_json_output (0.12s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1366: Took "89.080042ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1379: Took "33.527042ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.12s)
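The ProfileCmd timings above compare the full and light listings; --light skips validating each cluster's live status, which is consistent with it returning in roughly a third of the time here (33.5ms vs 89ms). Sketch:

    out/minikube-darwin-arm64 profile list
    out/minikube-darwin-arm64 profile list -o json --light   # status validation skipped; faster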
TestFunctional/parallel/MountCmd/any-port (5.15s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-926000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1149688656/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1726483115945861000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1149688656/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1726483115945861000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1149688656/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1726483115945861000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1149688656/001/test-1726483115945861000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-926000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-926000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (61.664459ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-926000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-926000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 16 10:38 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 16 10:38 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 16 10:38 test-1726483115945861000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-926000 ssh cat /mount-9p/test-1726483115945861000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-926000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [5d64114c-a1aa-463e-9064-23270fd00e07] Pending
helpers_test.go:344: "busybox-mount" [5d64114c-a1aa-463e-9064-23270fd00e07] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
E0916 03:38:37.890349    1652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/addons-490000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox-mount" [5d64114c-a1aa-463e-9064-23270fd00e07] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [5d64114c-a1aa-463e-9064-23270fd00e07] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.003123125s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-926000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-926000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-926000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-926000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-926000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1149688656/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (5.15s)
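The any-port flow mounts a host temp dir into the guest over 9p, verifies it from both sides, then runs a pod against it; the first findmnt above fails only because the mount had not settled yet, and the harness retries. A hedged manual equivalent (host path is illustrative):

    out/minikube-darwin-arm64 mount -p functional-926000 /tmp/hostdir:/mount-9p &
    out/minikube-darwin-arm64 -p functional-926000 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-darwin-arm64 -p functional-926000 ssh -- ls -la /mount-9p
    # when done, unmount from inside the guest (or kill the mount process)
    out/minikube-darwin-arm64 -p functional-926000 ssh "sudo umount -f /mount-9p"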
TestFunctional/parallel/MountCmd/specific-port (0.79s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-926000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port3263558262/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-926000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-926000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (66.4025ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-926000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-arm64 -p functional-926000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-926000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port3263558262/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-926000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-926000 ssh "sudo umount -f /mount-9p": exit status 1 (67.453625ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-926000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-926000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port3263558262/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (0.79s)
TestFunctional/parallel/MountCmd/VerifyCleanup (1.77s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-926000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1790476704/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-926000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1790476704/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-926000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1790476704/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-926000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-926000 ssh "findmnt -T" /mount1: exit status 1 (97.585292ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-926000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-926000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-926000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-926000 ssh "findmnt -T" /mount3: exit status 1 (70.0645ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-926000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-926000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-926000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-arm64 mount -p functional-926000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-926000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1790476704/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-926000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1790476704/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-926000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1790476704/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.77s)
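VerifyCleanup starts three overlapping mounts and then relies on a single kill switch rather than unmounting each one; the flag exercised at functional_test_mount_test.go:370 is the piece worth remembering:

    # terminate every minikube mount process for the profile in one shot
    out/minikube-darwin-arm64 mount -p functional-926000 --kill=true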
TestFunctional/delete_echo-server_images (0.06s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-926000
--- PASS: TestFunctional/delete_echo-server_images (0.06s)
TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-926000
--- PASS: TestFunctional/delete_my-image_image (0.02s)
TestFunctional/delete_minikube_cached_images (0.01s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-926000
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)
TestMultiControlPlane/serial/StartCluster (178.17s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-574000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
E0916 03:39:08.617083    1652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/addons-490000/client.crt: no such file or directory" logger="UnhandledError"
E0916 03:39:49.577285    1652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/addons-490000/client.crt: no such file or directory" logger="UnhandledError"
E0916 03:41:11.498164    1652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/addons-490000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-darwin-arm64 start -p ha-574000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : (2m57.978999125s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 -p ha-574000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (178.17s)
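StartCluster is the one long step in this group: --ha provisions a multi-control-plane cluster before the later AddWorkerNode step attaches a worker. The invocation reduced to its essentials, with driver and memory as used in this run:

    out/minikube-darwin-arm64 start -p ha-574000 --ha --wait=true --memory=2200 --driver=qemu2
    out/minikube-darwin-arm64 -p ha-574000 status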
TestMultiControlPlane/serial/DeployApp (5.09s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-574000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-574000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-darwin-arm64 kubectl -p ha-574000 -- rollout status deployment/busybox: (3.4462925s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-574000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-574000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-574000 -- exec busybox-7dff88458-26n8z -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-574000 -- exec busybox-7dff88458-hb2q6 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-574000 -- exec busybox-7dff88458-mkvkc -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-574000 -- exec busybox-7dff88458-26n8z -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-574000 -- exec busybox-7dff88458-hb2q6 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-574000 -- exec busybox-7dff88458-mkvkc -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-574000 -- exec busybox-7dff88458-26n8z -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-574000 -- exec busybox-7dff88458-hb2q6 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-574000 -- exec busybox-7dff88458-mkvkc -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.09s)
TestMultiControlPlane/serial/PingHostFromPods (0.76s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-574000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-574000 -- exec busybox-7dff88458-26n8z -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-574000 -- exec busybox-7dff88458-26n8z -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-574000 -- exec busybox-7dff88458-hb2q6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-574000 -- exec busybox-7dff88458-hb2q6 -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-574000 -- exec busybox-7dff88458-mkvkc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-574000 -- exec busybox-7dff88458-mkvkc -- sh -c "ping -c 1 192.168.105.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (0.76s)
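The shell pipeline in ha_test.go:207 is worth unpacking: on this busybox image, nslookup prints the resolved address on its fifth output line, so awk 'NR==5' isolates that line and cut -d' ' -f3 pulls the address field, which the next step pings. Roughly:

    kubectl --context ha-574000 exec <busybox-pod> -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    kubectl --context ha-574000 exec <busybox-pod> -- sh -c "ping -c 1 <address-from-previous-step>"

(<busybox-pod> and <address-from-previous-step> are placeholders for the values the harness captures.)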
TestMultiControlPlane/serial/AddWorkerNode (57.03s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-574000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-darwin-arm64 node add -p ha-574000 -v=7 --alsologtostderr: (56.803464625s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-arm64 -p ha-574000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (57.03s)
TestMultiControlPlane/serial/NodeLabels (0.14s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-574000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.14s)
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.25s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.25s)
TestMultiControlPlane/serial/CopyFile (4.16s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-574000 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-574000 cp testdata/cp-test.txt ha-574000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-574000 ssh -n ha-574000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-574000 cp ha-574000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile197360452/001/cp-test_ha-574000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-574000 ssh -n ha-574000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-574000 cp ha-574000:/home/docker/cp-test.txt ha-574000-m02:/home/docker/cp-test_ha-574000_ha-574000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-574000 ssh -n ha-574000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-574000 ssh -n ha-574000-m02 "sudo cat /home/docker/cp-test_ha-574000_ha-574000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-574000 cp ha-574000:/home/docker/cp-test.txt ha-574000-m03:/home/docker/cp-test_ha-574000_ha-574000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-574000 ssh -n ha-574000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-574000 ssh -n ha-574000-m03 "sudo cat /home/docker/cp-test_ha-574000_ha-574000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-574000 cp ha-574000:/home/docker/cp-test.txt ha-574000-m04:/home/docker/cp-test_ha-574000_ha-574000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-574000 ssh -n ha-574000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-574000 ssh -n ha-574000-m04 "sudo cat /home/docker/cp-test_ha-574000_ha-574000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-574000 cp testdata/cp-test.txt ha-574000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-574000 ssh -n ha-574000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-574000 cp ha-574000-m02:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile197360452/001/cp-test_ha-574000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-574000 ssh -n ha-574000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-574000 cp ha-574000-m02:/home/docker/cp-test.txt ha-574000:/home/docker/cp-test_ha-574000-m02_ha-574000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-574000 ssh -n ha-574000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-574000 ssh -n ha-574000 "sudo cat /home/docker/cp-test_ha-574000-m02_ha-574000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-574000 cp ha-574000-m02:/home/docker/cp-test.txt ha-574000-m03:/home/docker/cp-test_ha-574000-m02_ha-574000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-574000 ssh -n ha-574000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-574000 ssh -n ha-574000-m03 "sudo cat /home/docker/cp-test_ha-574000-m02_ha-574000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-574000 cp ha-574000-m02:/home/docker/cp-test.txt ha-574000-m04:/home/docker/cp-test_ha-574000-m02_ha-574000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-574000 ssh -n ha-574000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-574000 ssh -n ha-574000-m04 "sudo cat /home/docker/cp-test_ha-574000-m02_ha-574000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-574000 cp testdata/cp-test.txt ha-574000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-574000 ssh -n ha-574000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-574000 cp ha-574000-m03:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile197360452/001/cp-test_ha-574000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-574000 ssh -n ha-574000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-574000 cp ha-574000-m03:/home/docker/cp-test.txt ha-574000:/home/docker/cp-test_ha-574000-m03_ha-574000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-574000 ssh -n ha-574000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-574000 ssh -n ha-574000 "sudo cat /home/docker/cp-test_ha-574000-m03_ha-574000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-574000 cp ha-574000-m03:/home/docker/cp-test.txt ha-574000-m02:/home/docker/cp-test_ha-574000-m03_ha-574000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-574000 ssh -n ha-574000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-574000 ssh -n ha-574000-m02 "sudo cat /home/docker/cp-test_ha-574000-m03_ha-574000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-574000 cp ha-574000-m03:/home/docker/cp-test.txt ha-574000-m04:/home/docker/cp-test_ha-574000-m03_ha-574000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-574000 ssh -n ha-574000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-574000 ssh -n ha-574000-m04 "sudo cat /home/docker/cp-test_ha-574000-m03_ha-574000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-574000 cp testdata/cp-test.txt ha-574000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-574000 ssh -n ha-574000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-574000 cp ha-574000-m04:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile197360452/001/cp-test_ha-574000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-574000 ssh -n ha-574000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-574000 cp ha-574000-m04:/home/docker/cp-test.txt ha-574000:/home/docker/cp-test_ha-574000-m04_ha-574000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-574000 ssh -n ha-574000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-574000 ssh -n ha-574000 "sudo cat /home/docker/cp-test_ha-574000-m04_ha-574000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-574000 cp ha-574000-m04:/home/docker/cp-test.txt ha-574000-m02:/home/docker/cp-test_ha-574000-m04_ha-574000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-574000 ssh -n ha-574000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-574000 ssh -n ha-574000-m02 "sudo cat /home/docker/cp-test_ha-574000-m04_ha-574000-m02.txt"
E0916 03:42:57.163017    1652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/functional-926000/client.crt: no such file or directory" logger="UnhandledError"
E0916 03:42:57.169824    1652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/functional-926000/client.crt: no such file or directory" logger="UnhandledError"
E0916 03:42:57.181549    1652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/functional-926000/client.crt: no such file or directory" logger="UnhandledError"
E0916 03:42:57.205002    1652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/functional-926000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-574000 cp ha-574000-m04:/home/docker/cp-test.txt ha-574000-m03:/home/docker/cp-test_ha-574000-m04_ha-574000-m03.txt
E0916 03:42:57.246906    1652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/functional-926000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-574000 ssh -n ha-574000-m04 "sudo cat /home/docker/cp-test.txt"
E0916 03:42:57.330241    1652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/functional-926000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-574000 ssh -n ha-574000-m03 "sudo cat /home/docker/cp-test_ha-574000-m04_ha-574000-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (4.16s)
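CopyFile is an all-pairs sweep: testdata/cp-test.txt is pushed to each of the four nodes and then copied node-to-node in every direction, with a sudo cat over ssh validating each hop. One hop, spelled out:

    out/minikube-darwin-arm64 -p ha-574000 cp testdata/cp-test.txt ha-574000-m02:/home/docker/cp-test.txt
    out/minikube-darwin-arm64 -p ha-574000 ssh -n ha-574000-m02 "sudo cat /home/docker/cp-test.txt"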
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (77.95s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
E0916 03:52:57.106237    1652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1133/.minikube/profiles/functional-926000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:281: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (1m17.950451625s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (77.95s)
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.05s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.05s)
TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)
TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)
TestJSONOutput/stop/Command (2.92s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-579000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-579000 --output=json --user=testUser: (2.918000708s)
--- PASS: TestJSONOutput/stop/Command (2.92s)
TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)
TestErrorJSONOutput (0.2s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-487000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-487000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (95.846167ms)
-- stdout --
	{"specversion":"1.0","id":"c8fde857-4000-4a0b-b511-c93a65b00ec3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-487000] minikube v1.34.0 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4d128225-43aa-45fd-92fa-0618ff6dfb44","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19651"}}
	{"specversion":"1.0","id":"156dd219-bb45-497a-9c3c-b861e8b2d74b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig"}}
	{"specversion":"1.0","id":"4fc2b9f0-4546-403d-a697-40ecfc7bfb50","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"3589b7ac-438e-49e4-8e8d-b36c85c3ab8a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"d40b3184-a7c8-42cb-9578-f571bbb8f5eb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube"}}
	{"specversion":"1.0","id":"98de59c6-ce3b-403a-8ec6-560e5c30a7c1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"154189a6-98d0-4179-8327-f9f14c7f4b13","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-487000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-487000
--- PASS: TestErrorJSONOutput (0.20s)
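The stdout above shows what --output=json actually emits: one CloudEvents envelope per line, with the failure surfaced as an io.k8s.sigs.minikube.error event carrying exitcode 56 and the DRV_UNSUPPORTED_OS name. A sketch of consuming that stream (jq assumed available; the start command itself exits non-zero):

    out/minikube-darwin-arm64 start -p json-output-error-487000 --output=json --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'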

TestMainNoArgs (0.03s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestStoppedBinaryUpgrade/Setup (1.02s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.02s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-596000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-596000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (102.820709ms)

-- stdout --
	* [NoKubernetes-596000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19651
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-596000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-596000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (41.697666ms)

-- stdout --
	* The control-plane node NoKubernetes-596000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-596000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)
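
The "Non-zero exit ... exit status 83" line above is how the harness reports the command's exit code. A hypothetical Go sketch of the same check, using only os/exec; the binary path and profile name are copied from the log and only valid in that workspace, and the exit-83 interpretation is taken from the stdout above, not from any documented contract:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Same invocation the test runs above.
	cmd := exec.Command("out/minikube-darwin-arm64", "ssh",
		"-p", "NoKubernetes-596000",
		"sudo systemctl is-active --quiet service kubelet")
	out, err := cmd.CombinedOutput()

	var ee *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("kubelet is active") // systemctl returned 0
	case errors.As(err, &ee):
		// In the run above, exit 83 meant the host itself is stopped,
		// which the test accepts as "Kubernetes not running".
		fmt.Printf("exit status %d\n%s", ee.ExitCode(), out)
	default:
		fmt.Println("could not run command:", err)
	}
}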

TestNoKubernetes/serial/ProfileList (31.33s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.606611417s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.72577925s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.33s)

TestNoKubernetes/serial/Stop (3.49s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-596000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-596000: (3.489046666s)
--- PASS: TestNoKubernetes/serial/Stop (3.49s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-596000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-596000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (42.595084ms)

-- stdout --
	* The control-plane node NoKubernetes-596000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-596000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.76s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-716000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.76s)

TestStartStop/group/old-k8s-version/serial/Stop (3.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-460000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-460000 --alsologtostderr -v=3: (3.262767875s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.26s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-460000 -n old-k8s-version-460000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-460000 -n old-k8s-version-460000: exit status 7 (53.096167ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-460000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)
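
The check-then-enable pattern above recurs for every profile below: `status --format={{.Host}}` renders only the host field of the status through a Go template, exits 7 with "Stopped" on stdout after `minikube stop` (which the test notes "may be ok"), and the addon is enabled anyway. A hypothetical Go sketch of that flow; binary path, profile name, and flags are copied from the log, and the error handling is simplified:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	profile := "old-k8s-version-460000"
	status := exec.Command("out/minikube-darwin-arm64", "status",
		"--format={{.Host}}", "-p", profile, "-n", profile)
	out, err := status.CombinedOutput()

	// Exit status 7 with "Stopped" on stdout is expected here.
	var ee *exec.ExitError
	if err != nil && !errors.As(err, &ee) {
		fmt.Println("could not run status:", err)
		return
	}
	fmt.Printf("host state: %s\n", strings.TrimSpace(string(out)))

	enable := exec.Command("out/minikube-darwin-arm64", "addons", "enable",
		"dashboard", "-p", profile,
		"--images=MetricsScraper=registry.k8s.io/echoserver:1.4")
	if out, err := enable.CombinedOutput(); err != nil {
		fmt.Printf("enable failed: %v\n%s", err, out)
	}
}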

TestStartStop/group/no-preload/serial/Stop (2.99s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-654000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-654000 --alsologtostderr -v=3: (2.991191625s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (2.99s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.11s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-654000 -n no-preload-654000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-654000 -n no-preload-654000: exit status 7 (47.657041ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-654000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.11s)

TestStartStop/group/embed-certs/serial/Stop (3.45s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-309000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-309000 --alsologtostderr -v=3: (3.446793083s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (3.45s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-309000 -n embed-certs-309000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-309000 -n embed-certs-309000: exit status 7 (55.719125ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-309000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (2.79s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-383000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-383000 --alsologtostderr -v=3: (2.793903834s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (2.79s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-383000 -n default-k8s-diff-port-383000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-383000 -n default-k8s-diff-port-383000: exit status 7 (57.748667ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-383000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-580000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (3.12s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-580000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-580000 --alsologtostderr -v=3: (3.124201958s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.12s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-580000 -n newest-cni-580000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-580000 -n newest-cni-580000: exit status 7 (56.011ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-580000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (21/274)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.44s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-725000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-725000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-725000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-725000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-725000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-725000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-725000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-725000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-725000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-725000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-725000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-725000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-725000"

>>> host: /etc/hosts:
* Profile "cilium-725000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-725000"

>>> host: /etc/resolv.conf:
* Profile "cilium-725000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-725000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-725000

>>> host: crictl pods:
* Profile "cilium-725000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-725000"

>>> host: crictl containers:
* Profile "cilium-725000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-725000"

>>> k8s: describe netcat deployment:
error: context "cilium-725000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-725000" does not exist

>>> k8s: netcat logs:
error: context "cilium-725000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-725000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-725000" does not exist

>>> k8s: coredns logs:
error: context "cilium-725000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-725000" does not exist

>>> k8s: api server logs:
error: context "cilium-725000" does not exist

>>> host: /etc/cni:
* Profile "cilium-725000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-725000"

>>> host: ip a s:
* Profile "cilium-725000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-725000"

>>> host: ip r s:
* Profile "cilium-725000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-725000"

>>> host: iptables-save:
* Profile "cilium-725000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-725000"

>>> host: iptables table nat:
* Profile "cilium-725000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-725000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-725000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-725000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-725000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-725000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-725000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-725000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-725000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-725000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-725000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-725000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-725000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-725000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-725000"

>>> host: kubelet daemon config:
* Profile "cilium-725000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-725000"

>>> k8s: kubelet logs:
* Profile "cilium-725000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-725000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-725000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-725000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-725000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-725000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-725000

>>> host: docker daemon status:
* Profile "cilium-725000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-725000"

>>> host: docker daemon config:
* Profile "cilium-725000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-725000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-725000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-725000"

>>> host: docker system info:
* Profile "cilium-725000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-725000"

>>> host: cri-docker daemon status:
* Profile "cilium-725000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-725000"

>>> host: cri-docker daemon config:
* Profile "cilium-725000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-725000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-725000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-725000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-725000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-725000"

>>> host: cri-dockerd version:
* Profile "cilium-725000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-725000"

>>> host: containerd daemon status:
* Profile "cilium-725000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-725000"

>>> host: containerd daemon config:
* Profile "cilium-725000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-725000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-725000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-725000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-725000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-725000"

>>> host: containerd config dump:
* Profile "cilium-725000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-725000"

>>> host: crio daemon status:
* Profile "cilium-725000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-725000"

>>> host: crio daemon config:
* Profile "cilium-725000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-725000"

>>> host: /etc/crio:
* Profile "cilium-725000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-725000"

>>> host: crio config:
* Profile "cilium-725000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-725000"

----------------------- debugLogs end: cilium-725000 [took: 2.339058375s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-725000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-725000
--- SKIP: TestNetworkPlugins/group/cilium (2.44s)

TestStartStop/group/disable-driver-mounts (0.11s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-544000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-544000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.11s)
